A Real-World Benchmark for Reinforcement Learning based Recommender System

Overview

RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System


RL4RS is a real-world deep reinforcement learning recommender system benchmark for practitioners and researchers.

import gym
from rl4rs.env.slate import SlateRecEnv, SlateState

# `config` holds the env settings (sample_file, max_steps, etc.) and `epoch`
# the number of passes; both are defined outside this snippet (see below)
sim = SlateRecEnv(config, state_cls=SlateState)
env = gym.make('SlateRecEnv-v0', recsim=sim)
for i in range(epoch):
    obs = env.reset()
    for j in range(config["max_steps"]):
        action = env.offline_action  # replay the logged (offline) action
        next_obs, reward, done, info = env.step(action)
        if done[0]:  # vector env: `done` holds one flag per sub-env
            break
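
The `config` dict and `epoch` above are defined outside the snippet. A minimal, hypothetical config sketch follows; the authoritative keys are set in the shell scripts under reproductions/ (the field names besides `max_steps` mirror the run_batch_rl.sh invocation quoted further down, and all values are illustrative only):

# hypothetical config sketch; real configs live in reproductions/*.sh
config = {
    "env": "SlateRecEnv-v0",                            # environment id
    "iteminfo_file": "dataset/item_info.csv",           # item metadata
    "sample_file": "dataset/rl4rs_dataset_a_shuf.csv",  # logged interactions
    "model_file": "output/simulator_a_dien/model",      # finetuned simulator
    "max_steps": 9,                                     # steps per episode (assumed value)
}
epoch = 10  # number of replay passes over the dataset (assumed value)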

Dataset Download: https://drive.google.com/file/d/1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v/view?usp=sharing

Paper: https://arxiv.org/pdf/2110.11073.pdf

Kaggle Competition (old version): https://www.kaggle.com/c/bigdata2021-rl-recsys/overview

Resource Page: https://fuxi-up-research.gitbook.io/fuxi-up-challenges/

key features

⭐ Real-World Datasets

  • two real-world datasets: Unlike artificial or semi-simulated datasets, RL4RS collects the raw logged data from one of the most popular games released by NetEase Games, where recommendation is naturally a sequential decision-making problem.
  • data understanding tool: RL4RS provides a data understanding tool for testing whether RL is a proper fit for a given recommendation dataset.
  • advanced dataset setting: For each dataset, RL4RS provides separate data collected before and after RL deployment, which simulates the difficulty of training a good RL policy from data collected by an SL-based policy (see the pandas sketch after this list).
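
As a concrete illustration of the advanced dataset setting, here is a minimal pandas sketch that loads the pre-deployment (SL) and post-deployment (RL) logs for dataset A. It assumes the raw_data CSVs from the archive listed below have been unpacked into dataset/ and use standard comma separators (adjust `sep` if they do not):

import pandas as pd

# logs collected under the SL-based policy vs. after RL deployment;
# file names match the raw_data/ entries in the archive listing below
sl_log = pd.read_csv("dataset/rl4rs_dataset_a_sl.csv")
rl_log = pd.read_csv("dataset/rl4rs_dataset_a_rl.csv")
print(len(sl_log), "SL rows;", len(rl_log), "RL rows")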

⚡ Practical RL Baselines

  • model-free RL: RL4RS supports state-of-the-art RL libraries such as RLlib and Tianshou. We provide example code for state-of-the-art model-free algorithms (A2C, PPO, etc.) implemented with the RLlib library on both the discrete and continuous (combining policy gradients with a K-NN search; see the sketch after this list) RL4RS environments.
  • offline RL: RL4RS implements offline RL algorithms, including BC, BCQ, and CQL, through the d3rlpy library. RL4RS is also the first to report the effectiveness of offline RL algorithms (BCQ and CQL) in the RL-based RS domain.
  • RL-based RS baselines: RL4RS implements algorithms proposed in the RL-based RS domain, including Exact-k and Adversarial User Model.
  • offline RL evaluation: In addition to the reward indicator and the traditional RL evaluation setting (train and test on the same environment), RL4RS tries to provide a complete evaluation framework by placing more emphasis on counterfactual policy evaluation.
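
The continuous-action setting mentioned in the model-free bullet follows the usual proto-action pattern: the policy emits a continuous vector and a K-NN search snaps it to real items. A minimal numpy sketch of that mapping, with placeholder embeddings (the item count and embedding size are illustrative, not taken from RL4RS):

import numpy as np

def knn_action(proto_action, item_embeddings, k=1):
    # distance from the continuous proto-action to every item embedding
    dists = np.linalg.norm(item_embeddings - proto_action, axis=1)
    return np.argsort(dists)[:k]  # indices of the k nearest discrete items

items = np.random.randn(284, 32)  # placeholder: 284 items, 32-dim embeddings
proto = np.random.randn(32)       # placeholder policy output (e.g. from DDPG)
print(knn_action(proto, items, k=3))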

🔰 Easy-To-Use, Scalable API

  • low-coupling structure: RL4RS specifies a fixed data format to reduce code coupling, and all data-related logic is consolidated into data preprocessing scripts or user-defined state classes.
  • file-based RL environment: RL4RS implements a file-based gym environment that supports random sampling and sequential access to datasets exceeding memory size, and it is easy to extend to distributed file systems.
  • http-based vector env: RL4RS natively supports vector envs, i.e., the environment processes a batch of data at a time. We further wrap the env behind an HTTP interface so it can be deployed across multiple servers to accelerate sample generation (see the client sketch after this list).
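
For orientation, a rough sketch of how an HTTP-wrapped env could be driven from a client. It is modeled on the gym-http-api convention that rl4rs/server/gymHttpClient.py resembles, so the exact routes and payload fields here are assumptions, not the server's documented API:

import requests

SERVER = "http://127.0.0.1:5000"  # assumed gymHttpServer.py address

# create a remote env instance, reset it, and take one step;
# route names follow the gym-http-api convention (an assumption here)
instance = requests.post(f"{SERVER}/v1/envs/",
                         json={"env_id": "SlateRecEnv-v0"}).json()["instance_id"]
obs = requests.post(f"{SERVER}/v1/envs/{instance}/reset/").json()["observation"]
step = requests.post(f"{SERVER}/v1/envs/{instance}/step/",
                     json={"action": 0}).json()
print(step["reward"], step["done"])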

experimental features (welcome contributions!)

  • A new dataset for bundle recommendation with variable discounts, flexible recommendation triggers, and modifiable item content is in preparation.
  • Taking raw features rather than hidden-layer embeddings as the observation input for offline RL.
  • Model-based RL algorithms.
  • Reward-oriented simulation environment construction.
  • Reproducing more algorithms (RL models, safe exploration techniques, etc.) proposed in the RL-based RS domain.
  • Supporting Parametric-Action DQN, which takes concatenated state-action pairs as input and outputs the Q-value for each pair (see the sketch after this list).
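
To make the last item concrete, here is a minimal PyTorch sketch of a parametric-action Q-network that scores concatenated state-action pairs; all dimensions are illustrative, not taken from the RL4RS models:

import torch
import torch.nn as nn

class ParametricActionQ(nn.Module):
    """Q(s, a): one scalar per concatenated state-action pair."""

    def __init__(self, state_dim=540, action_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action_feats):
        # state: (B, state_dim); action_feats: (B, n_actions, action_dim)
        n = action_feats.shape[1]
        s = state.unsqueeze(1).expand(-1, n, -1)            # repeat state per action
        q = self.net(torch.cat([s, action_feats], dim=-1))  # (B, n_actions, 1)
        return q.squeeze(-1)                                # Q-value per pair

q_net = ParametricActionQ()
print(q_net(torch.randn(2, 540), torch.randn(2, 284, 32)).shape)  # (2, 284)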

installation

RL4RS supports Linux and requires at least 64 GB of memory.

Github (recommended)

$ git clone https://github.com/fuxiAIlab/RL4RS
$ export PYTHONPATH=$PYTHONPATH:`pwd`/rl4rs
$ conda env create -f environment.yml
$ conda activate rl4rs
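
A quick sanity check after installation (run from the repository root, with the conda env active and PYTHONPATH set as above):

# should import cleanly if the environment is set up correctly
from rl4rs.env.slate import SlateRecEnv, SlateState
print("RL4RS import OK")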

Dataset Download (Google Drive)

Dataset Download: https://drive.google.com/file/d/1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v/view?usp=sharing

.
|-- batchrl
|   |-- BCQ_SeqSlateRecEnv-v0_b_all.h5
|   |-- BCQ_SlateRecEnv-v0_a_all.h5
|   |-- BC_SeqSlateRecEnv-v0_b_all.h5
|   |-- BC_SlateRecEnv-v0_a_all.h5
|   |-- CQL_SeqSlateRecEnv-v0_b_all.h5
|   `-- CQL_SlateRecEnv-v0_a_all.h5
|-- data_understanding_tool
|   |-- dataset
|   |   |-- ml-25m.zip
|   |   `-- yoochoose-clicks.dat.zip
|   `-- finetuned
|       |-- movielens.csv
|       |-- movielens.h5
|       |-- recsys15.csv
|       |-- recsys15.h5
|       |-- rl4rs.csv
|       `-- rl4rs.h5
|-- exactk
|   |-- exact_k.ckpt.10000.data-00000-of-00001
|   |-- exact_k.ckpt.10000.index
|   `-- exact_k.ckpt.10000.meta
|-- ope
|   `-- logged_policy.h5
|-- raw_data
|   |-- item_info.csv
|   |-- rl4rs_dataset_a_rl.csv
|   |-- rl4rs_dataset_a_sl.csv
|   |-- rl4rs_dataset_b_rl.csv
|   `-- rl4rs_dataset_b_sl.csv
`-- simulator
    |-- finetuned
    |   |-- simulator_a_dien
    |   |   |-- checkpoint
    |   |   |-- model.data-00000-of-00001
    |   |   |-- model.index
    |   |   `-- model.meta
    |   `-- simulator_b2_dien
    |       |-- checkpoint
    |       |-- model.data-00000-of-00001
    |       |-- model.index
    |       `-- model.meta
    |-- rl4rs_dataset_a_shuf.csv
    `-- rl4rs_dataset_b3_shuf.csv

two ways to use this resource

Reinforcement Learning Only

# move simulator/*.csv to rl4rs/dataset
# move simulator/finetuned/* to rl4rs/output
cd reproductions/
# run exact-k
bash run_exact_k.sh
# start http-based Env, then run RLlib library
nohup python -u rl4rs/server/gymHttpServer.py &
bash run_modelfree_rl.sh DQN/PPO/DDPG/PG/PG_conti/etc.

start from scratch (batch-rl, environment simulation, etc.)

cd reproductions/
# First step: generate tfrecords for supervised learning (environment simulation).
# This step is time-consuming; you can comment these lines out once they have run.
bash run_split.sh

# environment simulation part (need tfrecord)
# run these scripts to compare different SL methods
bash run_supervised_item.sh dnn/widedeep/dien/lstm
bash run_supervised_slate.sh dnn_slate/adversarial_slate/etc.
# or you can directly train DIEN-based simulator as RL Env.
bash run_simulator_train.sh dien

# model-free part (need run_simulator_train.sh)
# run exact-k
bash run_exact_k.sh
# start http-based Env, then run RLlib library
nohup python -u rl4rs/server/gymHttpServer.py &
bash run_modelfree_rl.sh DQN/PPO/DDPG/PG/PG_conti/etc.

# offline RL part (need run_simulator_train.sh)
# first generate the offline dataset for offline RL (dataset_generate stage),
# then train the offline RL algorithms (train stage)
bash run_batch_rl.sh BC/BCQ/CQL
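
Under the hood, run_batch_rl.sh drives the d3rlpy implementations. A standalone sketch of the same idea, training discrete CQL on placeholder transitions (the real observations/actions come from the dataset_generate stage; the array shapes here are illustrative only):

import numpy as np
import d3rlpy

# placeholder logged transitions; run_batch_rl.sh's dataset_generate stage
# produces the real ones from the simulator
observations = np.random.randn(1000, 540).astype(np.float32)
actions = np.random.randint(284, size=1000)
rewards = np.random.randn(1000).astype(np.float32)
terminals = np.zeros(1000)
terminals[9::10] = 1.0  # assumed episode boundaries every 10 steps

dataset = d3rlpy.dataset.MDPDataset(observations, actions, rewards, terminals)
cql = d3rlpy.algos.DiscreteCQL()  # swap in DiscreteBC / DiscreteBCQ likewise
cql.fit(dataset, n_epochs=1)      # d3rlpy v1.x API
cql.save_model("CQL_demo.h5")     # counterpart of the shipped batchrl/*.h5 files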

reported baselines

| algorithm | category | support mode |
| --- | --- | --- |
| Wide&Deep | supervised learning | item-wise classification / slate-wise classification / item ranking |
| GRU4Rec | supervised learning | item-wise classification / slate-wise classification / item ranking |
| DIEN | supervised learning | item-wise classification / slate-wise classification / item ranking |
| Adversarial User Model | supervised learning | item-wise classification / slate-wise classification / item ranking |
| Exact-K | model-free RL | discrete env & hidden state as observation |
| Policy Gradient (PG) | model-free RL | discrete env & raw feature/hidden state as observation |
| Deep Q-Network (DQN) | model-free RL | discrete env & raw feature/hidden state as observation |
| Deep Deterministic Policy Gradients (DDPG) | model-free RL | conti env & raw feature/hidden state as observation |
| Asynchronous Actor-Critic (A2C) | model-free RL | discrete/conti env & raw feature/hidden state as observation |
| Proximal Policy Optimization (PPO) | model-free RL | discrete/conti env & raw feature/hidden state as observation |
| Behavior Cloning | supervised learning / offline RL | discrete env & hidden state as observation |
| Batch Constrained Q-learning (BCQ) | offline RL | discrete env & hidden state as observation |
| Conservative Q-Learning (CQL) | offline RL | discrete env & hidden state as observation |

supported algorithms (from RLlib and d3rlpy)

| algorithm | discrete control | continuous control | offline RL? |
| --- | --- | --- | --- |
| Behavior Cloning (supervised learning) | ✅ | ✅ | |
| Deep Q-Network (DQN) | ✅ | ⛔ | |
| Double DQN | ✅ | ⛔ | |
| Rainbow | ✅ | ⛔ | |
| PPO | ✅ | ✅ | |
| A2C / A3C | ✅ | ✅ | |
| IMPALA | ✅ | ✅ | |
| Deep Deterministic Policy Gradients (DDPG) | ⛔ | ✅ | |
| Twin Delayed Deep Deterministic Policy Gradients (TD3) | ⛔ | ✅ | |
| Soft Actor-Critic (SAC) | ✅ | ✅ | |
| Batch Constrained Q-learning (BCQ) | ✅ | ✅ | ✅ |
| Bootstrapping Error Accumulation Reduction (BEAR) | ⛔ | ✅ | ✅ |
| Advantage-Weighted Regression (AWR) | ✅ | ✅ | ✅ |
| Conservative Q-Learning (CQL) | ✅ | ✅ | ✅ |
| Advantage Weighted Actor-Critic (AWAC) | ⛔ | ✅ | ✅ |
| Critic Regularized Regression (CRR) | ⛔ | ✅ | ✅ |
| Policy in Latent Action Space (PLAS) | ⛔ | ✅ | ✅ |
| TD3+BC | ⛔ | ✅ | ✅ |

examples

See script/ and reproductions/.

RLlib examples: https://docs.ray.io/en/latest/rllib-examples.html

d3rlpy examples: https://d3rlpy.readthedocs.io/en/v1.0.0/

reproductions

See reproductions/.

bash run_xx.sh ${param}
| experiment in the paper | shell script | optional param. | description |
| --- | --- | --- | --- |
| Sec.3 | run_split.sh | - | dataset split/shuffle/align (for dataset B)/to tfrecord |
| Sec.4 | run_mdp_checker.sh | recsys15/movielens/rl4rs | unzip ml-25m.zip and yoochoose-clicks.dat.zip into dataset/ |
| Sec.5.1 | run_supervised_item.sh | dnn/widedeep/lstm/dien | Table 5. Item-wise classification |
| Sec.5.1 | run_supervised_slate.sh | dnn_slate/widedeep_slate/lstm_slate/dien_slate/adversarial_slate | Table 5. Item-wise rank |
| Sec.5.1 | run_supervised_slate.sh | dnn_slate_multiclass/widedeep_slate_multiclass/lstm_slate_multiclass/dien_slate_multiclass | Table 5. Slate-wise classification |
| Sec.5.1 & Sec.6 | run_simulator_train.sh | dien | DIEN-based simulator for different trainsets |
| Sec.5.1 & Sec.6 | run_simulator_eval.sh | dien | Table 6. |
| Sec.5.1 & Sec.6 | run_modelfree_rl.sh | PG/DQN/A2C/PPO/IMPALA/DDPG/*_conti | Table 7. |
| Sec.5.2 & Sec.6 | run_batch_rl.sh | BC/BCQ/CQL | Table 8. |
| Sec.5.1 | run_exact_k.sh | - | Exact-k |
| - | run_simulator_env_test.sh | - | examines the consistency of features (observations) between the RL env and the supervised simulator |

contributions

Any kind of contribution to RL4RS would be highly appreciated! Please contact us by email.

community

| Channel | Link |
| --- | --- |
| Materials | Google Drive |
| Email | Mail |
| Issues | GitHub Issues |
| Fuxi Team | Fuxi HomePage |
| Our Team | Open-project |

citation

@article{2021RL4RS,
  title={RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System},
  author={Kai Wang and Zhene Zou and Yue Shang and Qilin Deng and Minghao Zhao and Runze Wu and Xudong Shen and Tangjie Lyu and Changjie Fan},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.11073}
}
You might also like...
DeepMind Alchemy task environment: a meta-reinforcement learning benchmark

The DeepMind Alchemy environment is a meta-reinforcement learning benchmark that presents tasks sampled from a task distribution with deep underlying structure.

RoboDesk A Multi-Task Reinforcement Learning Benchmark

RoboDesk A Multi-Task Reinforcement Learning Benchmark If you find this open source release useful, please reference in your paper: @misc{kannan2021ro

The Unsupervised Reinforcement Learning Benchmark (URLB)

The Unsupervised Reinforcement Learning Benchmark (URLB) URLB provides a set of leading algorithms for unsupervised reinforcement learning where agent

This is the official repository for evaluation on the NoW Benchmark Dataset. The goal of the NoW benchmark is to introduce a standard evaluation metric to measure the accuracy and robustness of 3D face reconstruction methods from a single image under variations in viewing angle, lighting, and common occlusions.
A toolkit for making real world machine learning and data analysis applications in C++

dlib C++ library Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real worl

Learning Generative Models of Textured 3D Meshes from Real-World Images, ICCV 2021

Learning Generative Models of Textured 3D Meshes from Real-World Images This is the reference implementation of "Learning Generative Models of Texture

Official codebase for Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World

Legged Robots that Keep on Learning Official codebase for Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World, whic

Product-based-recommendation-system - A product based recommendation system which uses Machine learning algorithm such as KNN and cosine similarity
Real-world Anomaly Detection in Surveillance Videos- pytorch Re-implementation

Real world Anomaly Detection in Surveillance Videos : Pytorch RE-Implementation This repository is a re-implementation of "Real-world Anomaly Detectio

Comments
  • No Appendix in origin paper

    Thanks for this repo! Section 4.2 of the paper says we can learn more about the data details in Appendix C, and Section 5.1 says more details about the environment simulation model are given in Appendix D. However, I can't find any appendix in the paper at the URL given in the repo. Was the appendix perhaps left out, or where can I find it? Thanks again!

    opened by Zessay 1
  • ConnectionResetError(104, 'Connection reset by peer')

    Sorry to bother you with an error report; I would like to ask for your advice. When running bash run_modelfree_rl.sh DQN, a connection error occurs. The error message is as follows:

    2022-11-15 08:19:12,029 INFO replay_buffer.py:46 -- Estimated max memory usage for replay buffer is 0.4361 GB (100000.0 batches of size 1, 4361 bytes each), available system memory is 201.44095232 GB
    2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_1/kernel:0' shape=(256, 64) dtype=float32>
    2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_1/bias:0' shape=(64,) dtype=float32>
    2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_out/kernel:0' shape=(64, 284) dtype=float32>
    2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_out/bias:0' shape=(284,) dtype=float32>
    2022-11-15 08:19:14,846 INFO multi_gpu_impl.py:143 -- Training on concatenated sample batches:

    { 'inputs': [ np.ndarray((576, 540), dtype=float32, min=-1.0, max=37.179, mean=-0.169), np.ndarray((576, 540), dtype=float32, min=-1.0, max=38.907, mean=-0.207), np.ndarray((576,), dtype=int64, min=1.0, max=283.0, mean=103.844), np.ndarray((576,), dtype=float32, min=0.0, max=162.121, mean=7.551), np.ndarray((576,), dtype=bool, min=0.0, max=1.0, mean=0.135), np.ndarray((576,), dtype=float64, min=1.0, max=1.0, mean=1.0)], 'placeholders': [ <tf.Tensor 'default_policy/obs:0' shape=(?, 540) dtype=float32>, <tf.Tensor 'default_policy/new_obs:0' shape=(?, 540) dtype=float32>, <tf.Tensor 'default_policy/action:0' shape=(?,) dtype=int64>, <tf.Tensor 'default_policy/rewards:0' shape=(?,) dtype=float32>, <tf.Tensor 'default_policy/dones:0' shape=(?,) dtype=float32>, <tf.Tensor 'default_policy/weights:0' shape=(?,) dtype=float32>], 'state_inputs': []}

    2022-11-15 08:19:14,846 INFO multi_gpu_impl.py:188 -- Divided 576 rollout sequences, each of length 1, among 1 devices.
    Traceback (most recent call last):
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 438, in _error_catcher
        yield
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 519, in read
        data = self._fp.read(amt) if not fp_closed else b""
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/http/client.py", line 463, in read
        n = self.readinto(b)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/http/client.py", line 507, in readinto
        n = self.fp.readinto(b)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/socket.py", line 586, in readinto
        return self._sock.recv_into(b)
    ConnectionResetError: [Errno 104] Connection reset by peer

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/wlxy/.local/lib/python3.6/site-packages/requests/models.py", line 760, in generate
        for chunk in self.raw.stream(chunk_size, decode_content=True):
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 576, in stream
        data = self.read(amt=amt, decode_content=decode_content)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 541, in read
        raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/contextlib.py", line 99, in __exit__
        self.gen.throw(type, value, traceback)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 455, in _error_catcher
        raise ProtocolError("Connection broken: %r" % e, e)
    urllib3.exceptions.ProtocolError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "modelfree_train.py", line 429, in <module>
        result = trainer.train()
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 643, in train
        raise e
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 629, in train
        result = Trainable.train(self)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/tune/trainable.py", line 237, in train
        result = self.step()
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 170, in step
        res = next(self.train_exec_impl)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 756, in __next__
        return next(self.built_iterator)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach
        for item in it:
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 843, in apply_filter
        for item in it:
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 843, in apply_filter
        for item in it:
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach
        for item in it:
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 843, in apply_filter
        for item in it:
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 1075, in build_union
        item = next(it)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 756, in __next__
        return next(self.built_iterator)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach
        for item in it:
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach
        for item in it:
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach
        for item in it:
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/execution/rollout_ops.py", line 75, in sampler
        yield workers.local_worker().sample()
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 739, in sample
        batches = [self.input_reader.next()]
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 101, in next
        batches = [self.get_data()]
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 231, in get_data
        item = next(self.rollout_provider)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 615, in _env_runner
        sample_collector=sample_collector,
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 934, in _process_observations
        env_id)
      File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/env/base_env.py", line 368, in try_reset
        return {_DUMMY_AGENT_ID: self.vector_env.reset_at(env_id)}
      File "/home/wlxy/userfolder/RL4RS/rl4rs/utils/rllib_vector_env.py", line 44, in reset_at
        self.reset_cache = self.env.reset()
      File "/home/wlxy/userfolder/RL4RS/rl4rs/server/httpEnv.py", line 43, in reset
        observation = self.client.env_reset(self.instance_id)
      File "/home/wlxy/userfolder/RL4RS/rl4rs/server/gymHttpClient.py", line 67, in env_reset
        resp = self._post_request(route, None)
      File "/home/wlxy/userfolder/RL4RS/rl4rs/server/gymHttpClient.py", line 43, in _post_request
        data=json.dumps(data))
      File "/home/wlxy/.local/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
        return self.request('POST', url, data=data, json=json, **kwargs)
      File "/home/wlxy/.local/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
        resp = self.send(prep, **send_kwargs)
      File "/home/wlxy/.local/lib/python3.6/site-packages/requests/sessions.py", line 687, in send
        r.content
      File "/home/wlxy/.local/lib/python3.6/site-packages/requests/models.py", line 838, in content
        self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
      File "/home/wlxy/.local/lib/python3.6/site-packages/requests/models.py", line 763, in generate
        raise ChunkedEncodingError(e)
    requests.exceptions.ChunkedEncodingError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))

    I would like to ask for your help, thank you very much.

    opened by hubin111 3
  • Problems about TensorFlow version and killed error

    I reproduced run_batch_rl according to the guidelines, but got the errors below.

    WARNING:tensorflow:From /root/miniconda3/envs/rl4rs/lib/python3.6/site-packages/tensorflow_core/python/ops/rnn_cell_impl.py:575: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor
    WARNING:tensorflow:From /root/miniconda3/envs/rl4rs/lib/python3.6/site-packages/deepctr/contrib/rnn.py:257: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where
    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/nets/dien.py:43: The name tf.keras.backend.get_session is deprecated. Please use tf.compat.v1.keras.backend.get_session instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/nets/dien.py:43: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/base.py:124: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/base.py:125: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/base.py:129: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

    /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/slate.py:279: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
      complete_states = np.array(samples.get_complete_states())
    run_batch_rl.sh: line 82: 180 Killed python -u batchrl_train.py $algo 'dataset_generate' "{'env':'SlateRecEnv-v0','iteminfo_file':'${rl4rs_dataset_dir}/item_info.csv','sample_file':'${rl4rs_dataset_dir}/rl4rs_dataset_a_shuf.csv','model_file':'${rl4rs_output_dir}/simulator_a_dien/model','trial_name':'a_all'}"

    First, there seem to be some warnings about the TensorFlow version. Mine is 1.15.0, and the environment file also specifies 1.15.0. I tried other versions such as 1.14.0 and 2.0.0 but still failed. Since these are only warnings rather than errors, I don't know whether I really need a different version. The other problem is that the run is ultimately reported as Killed and aborts.

    opened by Heth0531 2
Releases: v1.1.0
ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis

ImageBART NeurIPS 2021 Patrick Esser*, Robin Rombach*, Andreas Blattmann*, Björn Ommer * equal contribution arXiv | BibTeX | Poster Requirements A sui

CompVis Heidelberg 110 Jan 01, 2023
Neural network pruning for finding a sparse computational model for controlling a biological motor task.

MothPruning Scientific Overview Originally inspired by biological nervous systems, deep neural networks (DNNs) are powerful computational tools for mo

Olivia Thomas 0 Dec 14, 2022
[NeurIPS'21] Projected GANs Converge Faster

[Project] [PDF] [Supplementary] [Talk] This repository contains the code for our NeurIPS 2021 paper "Projected GANs Converge Faster" by Axel Sauer, Ka

798 Jan 04, 2023
MQBench Quantization Aware Training with PyTorch

MQBench Quantization Aware Training with PyTorch I am using MQBench(Model Quantization Benchmark)(http://mqbench.tech/) to quantize the model for depl

Ling Zhang 29 Nov 18, 2022
[CVPR 2021] Semi-Supervised Semantic Segmentation with Cross Pseudo Supervision

TorchSemiSeg [CVPR 2021] Semi-Supervised Semantic Segmentation with Cross Pseudo Supervision by Xiaokang Chen1, Yuhui Yuan2, Gang Zeng1, Jingdong Wang

Chen XiaoKang 387 Jan 08, 2023
Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk

Annoy Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given quer

Spotify 10.6k Jan 04, 2023
Official project repository for 'Normality-Calibrated Autoencoder for Unsupervised Anomaly Detection on Data Contamination'

NCAE_UAD Official project repository of 'Normality-Calibrated Autoencoder for Unsupervised Anomaly Detection on Data Contamination' Abstract In this p

Jongmin Andrew Yu 2 Feb 10, 2022
MiniSom is a minimalistic implementation of the Self Organizing Maps

MiniSom Self Organizing Maps MiniSom is a minimalistic and Numpy based implementation of the Self Organizing Maps (SOM). SOM is a type of Artificial N

Giuseppe Vettigli 1.2k Jan 03, 2023
Fast and simple implementation of RL algorithms, designed to run fully on GPU.

RSL RL Fast and simple implementation of RL algorithms, designed to run fully on GPU. This code is an evolution of rl-pytorch provided with NVIDIA's I

Robotic Systems Lab - Legged Robotics at ETH ZΓΌrich 68 Dec 29, 2022
PyBrain - Another Python Machine Learning Library.

PyBrain -- the Python Machine Learning Library =============================================== INSTALLATION ------------ Quick answer: make sure you

2.8k Dec 31, 2022
Rafael Project- Classifying rockets to different types using data science algorithms.

Rocket-Classify Rafael Project- Classifying rockets to different types using data science algorithms. In this project we received data base with data

Hadassah Engel 5 Sep 18, 2021
Convnext-tf - Unofficial tensorflow keras implementation of ConvNeXt

ConvNeXt Tensorflow This is unofficial tensorflow keras implementation of ConvNe

29 Oct 06, 2022
Pytorch implementation of the paper "Enhancing Content Preservation in Text Style Transfer Using Reverse Attention and Conditional Layer Normalization"

Dongkyu Lee 4 Sep 18, 2022
Scikit-learn compatible estimation of general graphical models

skggm : Gaussian graphical models using the scikit-learn API In the last decade, learning networks that encode conditional independence relationships

213 Jan 02, 2023
Modeling Category-Selective Cortical Regions with Topographic Variational Autoencoders

1 Oct 11, 2021
SSD: Single Shot MultiBox Detector pytorch implementation focusing on simplicity

SSD: Single Shot MultiBox Detector Introduction Here is my pytorch implementation of 2 models: SSD-Resnet50 and SSDLite-MobilenetV2.

Viet Nguyen 149 Jan 07, 2023
🕺 Full body detection and tracking

Pose-Detection 🤔 Overview Human pose estimation from video plays a critical role in various applications such as quantifying physical exercises, sign

Abbas Ataei 20 Nov 21, 2022
Learning to Reach Goals via Iterated Supervised Learning

Vanilla GCSL This repository contains a vanilla implementation of "Learning to Reach Goals via Iterated Supervised Learning" proposed by Dibya Gosh et

Christoph Heindl 4 Aug 10, 2022
TensorFlow implementation of ENet

TensorFlow-ENet TensorFlow implementation of ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation. This model was tested on th

Kwotsin 255 Oct 17, 2022
Statsmodels: statistical modeling and econometrics in Python

About statsmodels statsmodels is a Python package that provides a complement to scipy for statistical computations including descriptive statistics an

statsmodels 8.1k Jan 02, 2023