A Python interface for training Reinforcement Learning bots to battle on Pokemon Showdown

Overview

The Pokemon Showdown Python environment

A Python interface for creating battling Pokemon agents. poke-env offers an easy-to-use interface for creating rule-based agents or for training Reinforcement Learning bots to battle on Pokemon Showdown.

A simple agent in action

Getting started

Agents are instances of Python classes inheriting from Player. Here is what your first agent could look like:

from poke_env.player import Player

class YourFirstAgent(Player):
    def choose_move(self, battle):
        for move in battle.available_moves:
            if move.base_power > 90:
                # A powerful move! Let's use it
                return self.create_order(move)

        # No available move? Let's switch then!
        for switch in battle.available_switches:
            if switch.current_hp_fraction > battle.active_pokemon.current_hp_fraction:
                # This other pokemon has more HP left... Let's switch it in?
                return self.create_order(switch)

        # Not sure what to do?
        return self.choose_random_move(battle)
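
To try such an agent out, here is a minimal sketch, assuming a local server is running on the default port and a recent poke-env version (import paths have moved between releases):

import asyncio

from poke_env.player import RandomPlayer

async def main():
    # Pit our rule-based agent against a random baseline for ten battles
    agent = YourFirstAgent(battle_format="gen8randombattle")
    opponent = RandomPlayer(battle_format="gen8randombattle")
    await agent.battle_against(opponent, n_battles=10)
    print(f"YourFirstAgent won {agent.n_won_battles} / 10 battles")

asyncio.get_event_loop().run_until_complete(main())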

To get started, take a look at our documentation!

Documentation and examples

Documentation, detailed examples and starting code can be found on readthedocs.

Installation

This project requires Python >= 3.6 and a Pokemon Showdown server.

pip install poke-env

You can use Smogon's server to try out your agents against humans, but having a local development server is strongly recommended. In particular, it is recommended to use the --no-security flag, which runs a local server with most rate limiting and throttling turned off. Please refer to the docs for detailed setup instructions.

git clone https://github.com/smogon/pokemon-showdown.git
cd pokemon-showdown
npm install
cp config/config-example.js config/config.js
node pokemon-showdown start --no-security

Development version

You can also clone the latest master version with:

git clone https://github.com/hsahovic/poke-env.git

Dependencies and development dependencies can then be installed with:

pip install -r requirements.txt
pip install -r requirements-dev.txt

Acknowledgements

This project is a follow-up to a group project from an artificial intelligence class at Ecole Polytechnique.

You can find the original repository here. It is partially inspired by the showdown-battle-bot project. Of course, none of these would have been possible without Pokemon Showdown.

Team data comes from Smogon forums' RMT section.

Data

Data files are adapted versions of the js data files of Pokemon Showdown.

License

This project is released under the MIT license.


Comments
  • Gen 8 support

    As discussed in #20

    Copying over a checklist by @hsahovic that he mentioned here in order to keep track of what's left

    Things to update:

    • [x] src/data/pokedex.json
    • [x] src/data/moves.json
    • [x] src/environment/Battle.battle_parse_message
    • [x] src/environment/Battle.battle_parse_request
    • [x] src/player/Player.choose_random_move

    Things to add:

    • [x] src/environment/Battle.can_dynamax
    • [x] src/environment/Pokemon._dynamax

    Things to add if necessary:

    • [ ] src/environment/Battle.can_gigantamax
    • [ ] src/environment/Pokemon._gigantamax
    • [x] src/environment/Pokemon.is_dynamaxed
    • [ ] src/environment/Pokemon.is_gigantamaxed
    documentation enhancement 
    opened by szymonWojdat 24
  • Calling the gym-env by name

    Hi Haris! First of all thank you for putting in the effort of making poke-env.

    I ran your rl_with_open_ai_gym_wrapper.py and tried a bunch of other RL algorithms from keras-rl2 with the play_against method, and they worked just fine.

    Then, naturally, I would like to get poke-env working with other, newer and better-maintained RL libraries than keras-rl2.
    I tried to get RLlib working with poke-env, specifically with the play_against method, but couldn't get it to work.

    RLlib's training flow goes like this (code copied from RLlib's docs):

    import ray
    import ray.rllib.agents.ppo as ppo
    from ray.tune.logger import pretty_print

    ray.init()
    config = ppo.DEFAULT_CONFIG.copy()
    config["num_gpus"] = 0
    config["num_workers"] = 1
    config["eager"] = False
    trainer = ppo.PPOTrainer(config=config, env="CartPole-v0")
    
    # Can optionally call trainer.restore(path) to load a checkpoint.
    
    for i in range(1000):
       # Perform one iteration of training the policy with PPO
       result = trainer.train()
       print(pretty_print(result))
    
       if i % 100 == 0:
           checkpoint = trainer.save()
           print("checkpoint saved at", checkpoint)
    

    where the whole gym-env class is passed to the trainer object.
    Three days of trial and error led me to conclude that there is no workaround bridging this syntax and the play_against method in poke-env.

    I wonder if it's possible to wrap poke-env into a registered gym-env and make it callable by its name, like gym.make('poke_env-v0')?
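
    For reference, the standard Gym registration pattern being asked about looks roughly like this (the wrapper module and class here are hypothetical placeholders, not existing poke-env API):

    import gym
    from gym.envs.registration import register

    # Hypothetical: expose a gym.Env-compatible wrapper under a fixed id,
    # so libraries like RLlib can construct the environment by name.
    register(
        id="poke_env-v0",
        entry_point="my_package.my_module:PokeEnvWrapper",
    )

    env = gym.make("poke_env-v0")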

    bug enhancement 
    opened by lolanchen 19
  • file missing repl

    C:\Users\CyberKira\Documents\pokemon showdown\pokemon-showdown>node pokemon-showdown start --no-security
    RESTORE CHATROOM: lobby
    RESTORE CHATROOM: staff

    CRASH: Error: ENOENT: no such file or directory, scandir 'C:\Users\CyberKira\Documents\pokemon showdown\pokemon-showdown\logs\repl'
        at Object.readdirSync (fs.js:1043:3)
        at _class.start (C:\Users\CyberKira\Documents\pokemon showdown\pokemon-showdown\lib\repl.ts:12:16)
        at Object.<anonymous> (C:\Users\CyberKira\Documents\pokemon showdown\pokemon-showdown\server\index.ts:173:11)
        at Module._compile (internal/modules/cjs/loader.js:1072:14)
        at Module.m._compile (C:\Users\CyberKira\Documents\pokemon showdown\pokemon-showdown\node_modules\ts-node\src\index.ts:1310:23)
        at Module._extensions..js (internal/modules/cjs/loader.js:1101:10)
        at Object.require.extensions.<computed> [as .ts] (C:\Users\CyberKira\Documents\pokemon showdown\pokemon-showdown\node_modules\ts-node\src\index.ts:1313:12)
        at Module.load (internal/modules/cjs/loader.js:937:32)
        at Function.Module._load (internal/modules/cjs/loader.js:778:12)
        at Object.<anonymous> (C:\Users\CyberKira\Documents\pokemon showdown\pokemon-showdown\pokemon-showdown:134:22)

    SUBCRASH: Error: ENOENT: no such file or directory, open 'C:\Users\CyberKira\Documents\pokemon showdown\pokemon-showdown\logs\errors.txt'

    Worker 1 now listening on 0.0.0.0:8000
    Test your server at http://localhost:8000

    question 
    opened by godpow 16
  • Doubles support

    Closes #49

    Drafted this real quick for now, will keep working on this in the following days.

    For now the plan was to create AbstractBattle, use DoubleBattle for doubles and Battle for singles, move all common logic from Battle to AbstractBattle and implement the missing functionalities of DoubleBattle.

    enhancement 
    opened by szymonWojdat 16
  • Error encountered during player.ladder()

    Hi, I was testing a model I trained on Pokemon Showdown (code snippets below) when I ran into this issue. I'm able to challenge the bot to a battle and play against it perfectly well, but when I do player.ladder(100), it errors out after completing a single battle.

    2022-07-25 18:33:47,574 - UABGLSimpleDQN - ERROR - Unhandled exception raised while handling message:
    >battle-gen8randombattle-1625188644
    |-message|Nukkumatti lost due to inactivity.
    |
    |win|UABGLSimpleDQN
    Traceback (most recent call last):
      File "E:\Dev\meta-discovery\torch_env\lib\site-packages\poke_env\player\player_network_interface.py", line 131, in _handle_message
        await self._handle_battle_message(split_messages)
      File "E:\Dev\meta-discovery\torch_env\lib\site-packages\poke_env\player\player.py", line 235, in _handle_battle_message
        self._battle_finished_callback(battle)
      File "E:\Dev\meta-discovery\torch_env\lib\site-packages\poke_env\player\env_player.py", line 106, in _battle_finished_callback
        self._observations[battle].put(self.embed_battle(battle))
    KeyError: <poke_env.environment.battle.Gen8Battle object at 0x000001E1988D2EA0>
    Task exception was never retrieved
    future: <Task finished name='Task-39' coro=<PlayerNetwork._handle_message() done, defined at E:\Dev\meta-discovery\torch_env\lib\site-packages\poke_env\player\player_network_interface.py:117> exception=KeyError(<poke_env.environment.battle.Gen8Battle object at 0x000001E1988D2EA0>)>
    Traceback (most recent call last):
      File "E:\Dev\meta-discovery\torch_env\lib\site-packages\poke_env\player\player_network_interface.py", line 177, in _handle_message
        raise exception
      File "E:\Dev\meta-discovery\torch_env\lib\site-packages\poke_env\player\player_network_interface.py", line 131, in _handle_message
        await self._handle_battle_message(split_messages)
      File "E:\Dev\meta-discovery\torch_env\lib\site-packages\poke_env\player\player.py", line 235, in _handle_battle_message
        self._battle_finished_callback(battle)
      File "E:\Dev\meta-discovery\torch_env\lib\site-packages\poke_env\player\env_player.py", line 106, in _battle_finished_callback
        self._observations[battle].put(self.embed_battle(battle))
    KeyError: <poke_env.environment.battle.Gen8Battle object at 0x000001E1988D2EA0>
    

    Model code:

    class SimpleRLPlayerTesting(SimpleRLPlayer):
        def __init__(self, model, *args, **kwargs):
            SimpleRLPlayer.__init__(self, *args, **kwargs)
            self.model = model
    
        def choose_move(self, battle):
            state = self.embed_battle(battle)
            with torch.no_grad():
                predictions = self.model(state)
            action_mask = self.action_masks()
            action = np.argmax(predictions + action_mask)
            return self._action_to_move(action, battle)
    

    Script:

    async def main():
        ...
        player = simple_agent.SimpleRLPlayerTesting(
            model=model,
            player_configuration=PlayerConfiguration(USERNAME, PASSWORD),
            server_configuration=ShowdownServerConfiguration,
            start_timer_on_battle_start=True,
            **player_kwargs
        )
        print("Connecting to Pokemon Showdown...")
        await player.ladder(NUM_GAMES)
        # Print the rating of the player and its opponent after each battle
        for battle in player.battles.values():
            print(battle.rating, battle.opponent_rating)
    
    if __name__ == "__main__":
        asyncio.get_event_loop().run_until_complete(main())
    
    opened by akashsara 15
  • Training the agent against 2 or more opponents + Max moves are stored and shouldn't

    Hey Haris, I would like to train the agent against more than one opponent, but I can't figure out how. Also, how could I make it train against itself and against previously saved versions of itself?
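
    One rough pattern, sketched under the assumption of this era's EnvPlayer API (the play_against method used in the bundled examples), is to run the same training function against each opponent in turn:

    from poke_env.player.baselines import MaxBasePowerPlayer, SimpleHeuristicsPlayer
    from poke_env.player.random_player import RandomPlayer

    # Hedged sketch: env_player, dqn_training and dqn are user-defined,
    # as in poke-env's RL examples.
    opponents = [RandomPlayer(), MaxBasePowerPlayer(), SimpleHeuristicsPlayer()]
    for opponent in opponents:
        env_player.play_against(
            env_algorithm=dqn_training,
            opponent=opponent,
            env_algorithm_kwargs={"dqn": dqn, "nb_steps": 10000},
        )

    Self-play is trickier, since the opponent must also pick moves; one possible approach is to wrap a frozen copy of the model in a Player subclass and pass it as the opponent.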

    bug question 
    opened by mancho1987 13
  • (None, 22) Tensor is not an element of this graph, when trying to get examples/rl_with_open_ai_gym_wrapper.py to work

    Hi, I was trying to run examples/rl_with_open_ai_gym_wrapper.py on an Ubuntu 20.04 system, natively, just from the terminal, and I keep running into this error:

    ValueError: Tensor Tensor("dense_2/BiasAdd:0", shape=(None, 22), dtype=float32) is not an element of this graph.
    

    It comes from here:

    Training for 10000 steps ...
    Interval 1 (0 steps performed)
    /home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/engine/training.py:2470: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically.
      warnings.warn('`Model.state_updates` will be removed in a future version. '
    Exception in thread Thread-1:
    Traceback (most recent call last):
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/threading.py", line 973, in _bootstrap_inner
        self.run()
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/threading.py", line 910, in run
        self._target(*self._args, **self._kwargs)
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/poke_env/player/env_player.py", line 363, in <lambda>
        target=lambda: env_algorithm_wrapper(self, env_algorithm_kwargs)
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/poke_env/player/env_player.py", line 347, in env_algorithm_wrapper
        env_algorithm(player, **kwargs)
      File "/home/karna/Documents/poke-net/net/test2.py", line 79, in dqn_training
        dqn.fit(player, nb_steps=nb_steps)
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/rl/core.py", line 168, in fit
        action = self.forward(observation)
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/rl/agents/dqn.py", line 224, in forward
        q_values = self.compute_q_values(state)
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/rl/agents/dqn.py", line 68, in compute_q_values
        q_values = self.compute_batch_q_values([state]).flatten()
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/rl/agents/dqn.py", line 63, in compute_batch_q_values
        q_values = self.model.predict_on_batch(batch)
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/engine/training_v1.py", line 1200, in predict_on_batch
        self._make_predict_function()
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/engine/training_v1.py", line 2070, in _make_predict_function
        self.predict_function = backend.function(
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/backend.py", line 4092, in function
        return GraphExecutionFunction(
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/keras/backend.py", line 3885, in __init__
        with tf.control_dependencies([self.outputs[0]]):
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 5394, in control_dependencies
        return get_default_graph().control_dependencies(control_inputs)
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 4848, in control_dependencies
        c = self.as_graph_element(c)
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 3759, in as_graph_element
        return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
      File "/home/karna/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 3838, in _as_graph_element_locked
        raise ValueError("Tensor %s is not an element of this graph." % obj)
    ValueError: Tensor Tensor("dense_2/BiasAdd:0", shape=(None, 22), dtype=float32) is not an element of this graph.
    

    I thought this was a compatibility error, so I tried to install older TensorFlow versions, as I saw this example was written against tensorflow==2.0.0b1 and others mentioned it working.

    However, I never actually managed to install tensorflow==2.0.0b1. I did get 2.0.0 on Python 3.7.12, but I hit the same issue:

      File "/home/karna/.pyenv/versions/3.7.12/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3689, in _as_graph_element_locked
        raise ValueError("Tensor %s is not an element of this graph." % obj)
    ValueError: Tensor Tensor("dense_2/BiasAdd:0", shape=(None, 22), dtype=float32) is not an element of this graph.
    

    My installed packages with Python 3.9.7 are listed below; let me know if you want the 3.7.12 config, but I don't think the TensorFlow version is necessarily the issue.

    astunparse==1.6.3
    cachetools==4.2.4
    certifi==2021.10.8
    charset-normalizer==2.0.7
    clang==5.0
    cloudpickle==2.0.0
    flatbuffers==1.12
    gast==0.4.0
    google-auth==2.3.3
    google-auth-oauthlib==0.4.6
    google-pasta==0.2.0
    grpcio==1.41.1
    gym==0.21.0
    h5py==3.1.0
    idna==3.3
    keras==2.6.0
    Keras-Preprocessing==1.1.2
    keras-rl2==1.0.5
    Markdown==3.3.4
    numpy==1.19.5
    oauthlib==3.1.1
    opt-einsum==3.3.0
    orjson==3.6.4
    poke-env==0.4.20
    protobuf==3.19.1
    pyasn1==0.4.8
    pyasn1-modules==0.2.8
    requests==2.26.0
    requests-oauthlib==1.3.0
    rsa==4.7.2
    six==1.15.0
    tabulate==0.8.9
    tensorboard==2.7.0
    tensorboard-data-server==0.6.1
    tensorboard-plugin-wit==1.8.0
    tensorflow==2.6.1
    tensorflow-estimator==2.6.0
    termcolor==1.1.0
    typing-extensions==3.7.4.3
    urllib3==1.26.7
    websockets==10.0
    Werkzeug==2.0.2
    wrapt==1.12.1
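
    For what it's worth, this class of error usually appears when Keras lazily builds its predict function inside a worker thread while the model's graph lives in the main thread. A commonly suggested workaround (an assumption here, not a confirmed fix) is to force the function to be built in the main thread first:

    # Assumption: building the predict function before play_against spawns
    # its worker thread keeps the model attached to the right TF graph.
    # _make_predict_function is the private v1-style Keras API visible in
    # the traceback above.
    model._make_predict_function()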
    
    question 
    opened by EllangoK 11
  • Add ping interval and timeout to player

    Sometimes the showdown server can suffer from a lag spike, especially on less powerful machines. This PR allows increasing or disabling the timeouts of the keepalive mechanism used by websockets.

    @hsahovic let me know if you think this could be useful
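
    Assuming the new parameters mirror the keepalive arguments of websockets.connect (ping_interval and ping_timeout; assumed names), usage might look like:

    from poke_env.player.random_player import RandomPlayer

    # Assumed parameter names, mirroring websockets' keepalive options
    player = RandomPlayer(ping_interval=None)  # disable keepalive pings entirely
    patient = RandomPlayer(ping_timeout=60.0)  # tolerate slow pongs on laggy machines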

    opened by MatteoH2O1999 9
  • battle_tag is wrong for invite-only battles

    My AI kept randomly not doing anything in some battles, so I did some investigating.

    Turns out these errors were being put into the log:

    INFO - >>> battle-gen8randombattle-1220084985|/timer on
    DEBUG - <<< |pm|!Geniusect-Q|~|/error /timer - must be used in a chat room, not a console
    

    The root cause:

    2020-11-09 11:54:58,144 - Geniusect-Q - DEBUG - Received message to handle: >battle-gen8randombattle-1220084985-pgoxls8251t2qkotmilabhfhov3uhlepw
    |init|battle
    |title|Geniusect-Q vs. yaasgagaga
    |raw|<div class="broadcast-red"><strong>This battle is invite-only!</strong><br />Users must be invited with <code>/invite</code> (or be staff) to join</div>
    |j|☆Geniusect-Q
    
    2020-11-09 11:54:58,145 - Geniusect-Q - INFO - New battle started: battle-gen8randombattle-1220084985
    

    As you can see, the battle's "real" ID is battle-gen8randombattle-1220084985-pgoxls8251t2qkotmilabhfhov3uhlepw, but because of the way that the battle_tag is being split, the battle_tag is battle-gen8randombattle-1220084985.

    Would it be possible to throw an exception when /error is received in a PM (to identify when this happens) and also to fix how the battle_tag is generated for invite-only battles? I think this is a recent change, since I wasn't seeing it a couple days ago.
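
    To illustrate the truncation (hypothetical helper, not poke-env code):

    def battle_tag_from_room(room_id: str) -> str:
        # Keeping only the first three dash-separated parts reproduces the
        # bug: the invite-only suffix after the numeric id is lost.
        return "-".join(room_id.split("-")[:3])

    room = "battle-gen8randombattle-1220084985-pgoxls8251t2qkotmilabhfhov3uhlepw"
    assert battle_tag_from_room(room) == "battle-gen8randombattle-1220084985"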

    bug 
    opened by Jay2645 9
  • Sending or accepting challenges gives an error about coroutines

    If you call player.send_challenges or player.accept_challenges, you get this error:

    RuntimeWarning: coroutine 'final_tests' was never awaited
      final_tests()

    If you wrap it under an async function and call it with await, you get this:

    RuntimeError: Task <Task pending coro=<final_tests() running at pokerl.py:98> cb=[_run_until_complete_cb() at C:\Users\Username\Anaconda3\lib\asyncio\base_events.py:158]> got Future attached to a different loop
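
    For reference, the pattern that avoids both errors is to keep everything on a single event loop: create the player and await its coroutines inside the same async function. A minimal sketch, assuming a recent poke-env version:

    import asyncio

    from poke_env.player.random_player import RandomPlayer

    async def main():
        player = RandomPlayer()
        # accept_challenges is a coroutine: here, accept one challenge from anyone
        await player.accept_challenges(None, 1)

    asyncio.get_event_loop().run_until_complete(main())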

    bug 
    opened by Gummygamer 9
  • Dockerized runs of poke-env fail to access port 8000 during imports

    Hello! This is my first time submitting an issue on Github so I'm a bit scared 😓

    I'm basically trying to dockerize the training process of a DQN bot to keep my local environment clean. I've made a dockerized version of the pokemon showdown server that runs with the '--no-security' flag (docker.io/nacharya114/pokemonshowdown), and I've created a jupyter notebook docker stack that mounts to my working directory that runs pokebot. I'm getting errors at import that read:

    2021-05-05 17:14:24,768 - RandomPlayer 1 - ERROR - Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 8000), [Errno 99] Cannot assign requested address
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/asyncio/base_events.py", line 1010, in create_connection
        sock = await self._connect_sock(
      File "/opt/conda/lib/python3.8/asyncio/base_events.py", line 924, in _connect_sock
        await self.sock_connect(sock, address)
      File "/opt/conda/lib/python3.8/asyncio/selector_events.py", line 496, in sock_connect
        return await fut
      File "/opt/conda/lib/python3.8/asyncio/futures.py", line 260, in __await__
        yield self  # This tells Task to wait for completion.
      File "/opt/conda/lib/python3.8/asyncio/tasks.py", line 349, in __wakeup
        future.result()
      File "/opt/conda/lib/python3.8/asyncio/futures.py", line 178, in result
        raise self._exception
      File "/opt/conda/lib/python3.8/asyncio/selector_events.py", line 528, in _sock_connect_cb
        raise OSError(err, f'Connect call failed {address}')
    ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 8000)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/site-packages/poke_env/player/player_network_interface.py", line 242, in listen
        async with websockets.connect(
      File "/opt/conda/lib/python3.8/site-packages/websockets/legacy/client.py", line 604, in __aenter__
        return await self
      File "/opt/conda/lib/python3.8/site-packages/websockets/legacy/client.py", line 622, in __await_impl__
        transport, protocol = await self._create_connection()
      File "/opt/conda/lib/python3.8/asyncio/base_events.py", line 1033, in create_connection
        raise OSError('Multiple exceptions: {}'.format(
    OSError: Multiple exceptions: [Errno 111] Connect call failed ('127.0.0.1', 8000), [Errno 99] Cannot assign requested address
    

    All the cell is doing is:

    # Imports
    import dotmap
    import json
    import asyncio
    import os
    import importlib
    
    from poke_env.server_configuration import ServerConfiguration
    from poke_env.player.random_player import RandomPlayer
    from poke_env.player_configuration import PlayerConfiguration
    from poke_env.player.baselines import MaxBasePowerPlayer, SimpleHeuristicsPlayer
    
    from pokebot import BotPlayer
    
    
    
    PIPELINE_PATH = os.path.join(os.curdir, "hparams.json")
    SAVE_PATH = os.path.join(os.curdir, "fakemodel.h5")
    
    my_server_config = ServerConfiguration(
        "ps:8000",
        "https://play.pokemonshowdown.com/action.php?"
    )
    

    I've run these two docker containers via compose and have used ping to verify that they can talk to each other over TCP. I'm a little bit stuck on why the import step is blocked by localhost:8000 being unavailable. Thanks to anyone whom this concerns!

    question 
    opened by nacharya114 8
  • Bump peter-evans/create-pull-request from 3 to 4

    Bumps peter-evans/create-pull-request from 3 to 4.

    Release notes

    Sourced from peter-evans/create-pull-request's releases.

    Create Pull Request v4.0.0

    Breaking changes

    • The add-paths input no longer accepts -A as a valid value. When committing all new and modified files the add-paths input should be omitted.
    • If using self-hosted runners or GitHub Enterprise Server, there are minimum requirements for v4 to run. See "What's new" below for details.

    What's new

    • Updated runtime to Node.js 16
      • The action now requires a minimum version of v2.285.0 for the Actions Runner.
      • If using GitHub Enterprise Server, the action requires GHES 3.4 or later.

    What's Changed

    New Contributors

    Full Changelog: https://github.com/peter-evans/create-pull-request/compare/v3.14.0...v4.0.0

    Create Pull Request v3.14.0

    This release reverts a commit made to bump the runtime to node 16. It inadvertently caused an issue for users on GitHub Enterprise. Apologies. 🙇‍♂️

    What's Changed

    Full Changelog: https://github.com/peter-evans/create-pull-request/compare/v3.13.0...v3.14.0

    Create Pull Request v3.13.0

    What's Changed

    New Contributors

    Full Changelog: https://github.com/peter-evans/create-pull-request/compare/v3.12.1...v3.13.0

    Create Pull Request v3.12.1

    What's Changed

    New Contributors

    ... (truncated)

    Commits
    • 2b011fa fix: add check for missing token input (#1324)
    • 331d02c fix: support github server url for pushing to fork (#1318)
    • d7db273 fix: handle update after force pushing base to a new commit (#1307)
    • ee93d78 test: set default branch to main (#1310)
    • 6c704eb docs: clarify limitations of push-to-fork with restricted token
    • 88bf0de docs: correct examples
    • b38e8b0 docs: replace set-output in example
    • b4d5173 feat: switch proxy implementation (#1269)
    • ad43dcc build(deps): bump @actions/io from 1.1.1 to 1.1.2 (#1280)
    • c2f9cef build(deps): bump @actions/exec from 1.1.0 to 1.1.1 (#1279)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    opened by dependabot[bot] 2
  • argument error in rl_with_new_open_ai_gym_wrapper.py

    Hello! On line 76, an error is raised because SimpleRLPlayer is missing the opponent argument. Commenting out the "testing the environment" portion still allows the rest of the code to run smoothly.

    opened by fbkhalif 0
  • Error: read ECONNRESET when using Ubuntu 20.04 on WSL2

    When following the Getting Started guide, I get CRASH: Error: read ECONNRESET when running node pokemon-showdown start --no-security on Ubuntu 20.04 on WSL2:

    RESTORE CHATROOM: lobby
    RESTORE CHATROOM: staff

    CRASH: Error: read ECONNRESET
        at Pipe.onStreamRead (node:internal/stream_base_commons:217:20)
        at Pipe.callbackTrampoline (node:internal/async_hooks:130:17)

    (the same CRASH is printed two more times)

    Worker 1 now listening on 0.0.0.0:8000
    Test your server at http://localhost:8000

    When attempting to run the RandomPlayer example from the documentation, I also get the following warning, which never resolves itself: RandomPlayer 1 - WARNING - Popup message received: |popup|The server is restarting. Battles will be available again in a few minutes.

    opened by kasmith11 3
  • Wrappers for CLI functions?

    A nice-to-have would be some basic wrappers for the CLI tools. For example, for testing VGC applications, being able to call the CLI command "generate-team gen8vgc2022 12345" with a Python function. I might be able to work on this after I learn a bit more, but I'm pretty rusty with Python (not that I ever knew it that well).
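
    A thin wrapper along these lines seems feasible (hypothetical helper; generate-team is the Showdown CLI command mentioned above):

    import subprocess

    def generate_team(format_id: str, seed: str = "") -> str:
        # Hypothetical wrapper: shell out to the pokemon-showdown CLI and
        # return the packed team string it prints.
        cmd = ["node", "pokemon-showdown", "generate-team", format_id]
        if seed:
            cmd.append(seed)
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()

    # e.g. generate_team("gen8vgc2022", "12345")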

    opened by melondonkey 1
  • Include items.json in data?

    Hi! I'm not sure if this has been discussed before (couldn't find anything on it) but is there a reason items aren't included there? Everything else seems to be there. I know that the items aren't necessarily used anywhere in poke-env itself, but I think it would still be helpful for users creating more complex states.

    In my own case, I'm using the files in the data folder to get information on Pokemon, abilities, moves, types, stats etc. However I have to rely on some form of external source for the items. This means that I might have outdated information unless I directly use Showdown as a source. Since there seems to be some form of automated data updates for the information in the data folder, would it be possible to include items there as well?

    opened by akashsara 0
Releases(0.5.0)
  • 0.5.0(Aug 25, 2022)

    • Rework gym API to be synchronous
    • Add misc effects
    • Fix misc bugs
    • Update data
    • Drop Python 3.6 support and add Python 3.10 support
    • Revamp inits

    Thanks to @MatteoH2O1999 and @akashsara for your amazing work :)

  • 0.4.21(Nov 11, 2021)

    • Add replay saving feature to Player objects - use Player(..., save_replays=True) to try it out!
    • Unify ability representation
    • Better handling of hiddenpower, especially in gens < 8
    • Add missing AbstractBattle abstract_property values
    • Add Battle.opponent_can_mega_evolve / Battle.opponent_can_z_move properties
  • 0.4.20(Oct 8, 2021)

    • Update data files
    • Improve move message parsing
    • Better capping of number of moves (to 4)
    • Clarify Battle.weather typing

    Minor:

    • Calling EnvPlayer.step when reset hasn't been called raises an exception, as per some OpenAI Gym implementations
  • 0.4.19(Sep 20, 2021)

    This release adds:

    • handling of -swapsideconditions messages
    • /log PM messages are now logged with log.info instead of log.warning
    • Fix hidden power Moves initialization - their type is now correctly inferred
    • Fix hanging env_player.reset when called before the current battle is finished
    • env_player.complete_current_battle now forfeits instead of performing random moves until the battle finishes
    • env_player.step(-1) forfeits the battle
  • 0.4.17(Aug 19, 2021)

    • Coroutines are no longer stored in PlayerNetworkInterface - this should eliminate a bug where keeping Player objects around for a great number of battles led to a monotonic increase in RAM usage
    • Set Pokemon object ability property when there's only one possibility
    • Better ability parsing
    • Better items parsing
  • 0.4.16(Jul 21, 2021)

  • 0.4.15(May 26, 2021)

    This release lets users set gen 1, 2 and 3 formats. poke-env will fall back to gen 4 objects and log a warning, as opposed to raising an obscure exception as in previous versions.

    Misc: removed ailogger dependency

  • 0.4.14(May 23, 2021)

  • 0.4.13(May 14, 2021)

    • Fix a bug causing toxic counter to increment incorrectly when a Pokemon was switched out
    • Fix a bug causing multiple terrains to be present in Battle.fields simultaneously
  • 0.4.12(Apr 25, 2021)

    Previous versions of poke-env used the same data files for every generation. This led to incorrect data being used for older formats. This release adds gen-specific move and pokedex files for gens 4 to 8. It also adds a handler for message and -message battle messages.

  • 0.4.11(Apr 17, 2021)

  • 0.4.10(Apr 5, 2021)

  • 0.4.9(Apr 1, 2021)

    This release introduces two minor bug fixes:

    • REFLECT was added to effects, as it is used as such in gen 1 by pokemon showdown
    • A typo in Pokemon._start_effect was preventing some effects from being tracked correctly
  • 0.4.8(Feb 27, 2021)

    • Add dynamax parsing / generation in teambuilder submodule
    • Switch pokemon-showdown recommended version from custom fork to smogon master with custom CLI flags
    • Add Effect counters
    • Add Weather counters
    • Add Field counters
    • Add first turn per pokemon tracking
    • Add status counters
    • Parse cant messages
    • Add sets of special Effects and Moves and matching properties in their respective classes

    Misc bug fixes:

    • Fix a rare issue in VGC HP parsing where a g character could appear in HP update information
    • Add CELEBRATE and G_MAX_CHI_STRIKE effects
    • Metronome battles are now treated as double battles
    • Add dedicated Power Herb and Sky Drop cases for battle state tracking
  • 0.4.7(Dec 24, 2020)

    This release transforms the way side conditions are tracked in Battle objects: instead of being represented as a set of SideCondition objects, side conditions are now a dict mapping SideCondition to int.
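
    For example (illustrative values), two layers of Spikes now appear as a count rather than mere presence:

    from poke_env.environment.side_condition import SideCondition

    # Pre-0.4.7: a set recording presence only
    old_representation = {SideCondition.SPIKES}
    # 0.4.7+: a dict recording how many layers are up
    new_representation = {SideCondition.SPIKES: 2}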

  • 0.4.6(Dec 24, 2020)

    This release adds an init option to each Player class: start_timer_on_battle_start. Setting it to True will make the player request the timer at the start of each battle.
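
    For instance (any Player subclass accepts the option):

    from poke_env.player.random_player import RandomPlayer

    # Request the battle timer as soon as each battle starts
    player = RandomPlayer(start_timer_on_battle_start=True)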

  • 0.4.5(Dec 22, 2020)

    This release introduces preliminary gen 4, 5 and 6 support. It introduces and/or adapts missing mechanisms and showdown quirks that are specific to these past gens. Additionally, it:

    • Adds Gen4EnvSinglePlayer, Gen5EnvSinglePlayer and Gen6EnvSinglePlayer classes
    • Adds default battle formats for EnvPlayer child classes. In particular, GenXEnvSinglePlayer now automatically defaults to genXrandombattle. EnvPlayer defaults to gen8randombattle.
    • Fixes misc issues in Effect names.
  • 0.4.4(Dec 16, 2020)

    This release adapts to updates in showdown's protocol and introduces a more tolerant system for dealing with unrecognized showdown messages (i.e. logging a warning instead of raising an exception).

  • 0.4.3(Dec 1, 2020)

  • 0.4.2(Nov 26, 2020)

    This release is focused on performance optimization, and includes a reduction of ~30% to 40% in message handling time. This is mainly achieved through a more performant JSON deserialization library, better caching and many minor optimizations.

    Additionally, a couple of missing Enums were also added.

    This release also includes BattleOrder objects, which are wrappers of messages sent to showdown during a battle.

  • 0.4.0(Nov 15, 2020)

    This version introduces preliminary doubles support!

    The Battle backend has also been refactored into AbstractBattle, Battle and DoubleBattle.

    Thanks @szymonWojdat for your amazing work!

  • 0.3.10(Nov 9, 2020)

    This release introduces a minor bug fix regarding battle tag parsing; private battles (e.g. invite-only ones) should now be managed properly. For more information, please see #88.

  • 0.3.9(Nov 2, 2020)

  • 0.3.8(Oct 27, 2020)

  • 0.3.7(Oct 22, 2020)

  • 0.3.6(Sep 16, 2020)

    This release introduces a couple of minor bugfixes:

    • In showdown team parsing, Pokemon without items caused errors, as per #65
    • Showdown t: messages could cause errors
    • swap-boosts messages were not correctly parsed
    • SHP had a couple of small quirks (#61)

    Additionally, this release also adds raw stats utilities contributed by @dmizr (#54).

  • 0.3.5(Jun 8, 2020)

    This release fixes a bug arising when special moves (e.g. recharge) are used with Pokemon.damage_multiplier, by introducing a default return value of 1 when the move argument has no type.
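
    Concretely (a comment-only sketch):

    # Illustrative, assuming `pokemon` and `move` are poke-env objects in an
    # ongoing battle: for a typeless special move such as recharge,
    #     pokemon.damage_multiplier(move)
    # now returns a neutral 1 instead of misbehaving.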

    Other changes:

    • RandomPlayer can be found in poke_env.player.baselines as well as poke_env.player.random_player.
    • Add choose_default_move method to Player.
    • An infinite battle loop could arise with baseline players in very specific situations related to Ditto. A fail-safe mechanism has been introduced to avoid them: when specific errors are encountered in a battle, the selected move has a small chance of being replaced by showdown's default order.