
Tianshou (天授) is a reinforcement learning platform based on pure PyTorch. Unlike existing reinforcement learning libraries, which are mainly based on TensorFlow, have many nested classes, unfriendly APIs, or run slowly, Tianshou provides a fast, modularized framework and a pythonic API for building deep reinforcement learning agents with the least number of lines of code. The supported algorithms currently include, among others, DQN, REINFORCE, A2C, TRPO, PPO, DDPG, TD3, and SAC.

Here are Tianshou's other features:

  • Elegant framework, using only ~4000 lines of code
  • State-of-the-art MuJoCo benchmark for REINFORCE/A2C/TRPO/PPO/DDPG/TD3/SAC algorithms
  • Support for parallel environment simulation (synchronous or asynchronous) for all algorithms
  • Support for recurrent state representation in the actor and critic networks (RNN-style training for POMDPs)
  • Support for any type of environment state/action, e.g. a dict or a self-defined class (see the sketch after this list)
  • Support for customized training processes
  • Support for n-step return estimation and prioritized experience replay for all Q-learning based algorithms; GAE, n-step and PER are very fast thanks to Numba JIT compilation and vectorized NumPy operations
  • Support for multi-agent RL
  • Comprehensive documentation, PEP8 code-style checking, type checking and unit tests
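
As a quick illustration of the nested state/action support mentioned above, here is a minimal sketch built on Tianshou's Batch data structure (the field names below are just illustrative):

import numpy as np
from tianshou.data import Batch

# Batch stores arbitrarily nested dicts of arrays, so an environment
# observation can itself be a dict (or a self-defined class)
b = Batch(obs={'position': np.zeros((4, 3)), 'velocity': np.ones((4, 3))}, act=np.arange(4))
print(b.obs.position.shape)  # (4, 3); nested fields support attribute access
print(b[0])                  # indexing slices every nested field at once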

In Chinese, Tianshou means "divinely ordained" and is derived from "the gift of being born with". Tianshou is a reinforcement learning platform, and RL algorithms do not learn from humans. Taking the name "Tianshou" means that there is no teacher to study with; instead, the agent learns by itself through constant interaction with the environment.



Installation

Tianshou is currently hosted on PyPI and conda-forge. It requires Python >= 3.6.

You can simply install Tianshou from PyPI with the following command:

$ pip install tianshou

If you use Anaconda or Miniconda, you can install Tianshou from conda-forge through the following command:

$ conda install -c conda-forge tianshou

You can also install the latest version from GitHub:

$ pip install git+https://github.com/thu-ml/tianshou.git@master --upgrade

After installation, open your Python console and type

import tianshou
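print(tianshou.__version__)  # optional: check the installed version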

If no error occurs, you have successfully installed Tianshou.


Documentation

The tutorials and API documentation are hosted on tianshou.readthedocs.io.

The example scripts are under the test/ and examples/ folders.


Why Tianshou?

Comprehensive Functionality

| RL Platform | # of Alg. (1) | Custom Env | Batch Training | RNN Support | Nested Observation | Backend |
| --- | --- | --- | --- | --- | --- | --- |
| Baselines | 9 | ✔️ (gym) | (2) | ✔️ | | TF1 |
| Stable-Baselines | 11 | ✔️ (gym) | (2) | ✔️ | | TF1 |
| Stable-Baselines3 | 7 (3) | ✔️ (gym) | (2) | | ✔️ | PyTorch |
| Ray/RLlib | 16 | ✔️ | ✔️ | ✔️ | ✔️ | TF/PyTorch |
| SpinningUp | 6 | ✔️ (gym) | (2) | | | PyTorch |
| Dopamine | 7 | | | | | TF/JAX |
| ACME | 14 | ✔️ (dm_env) | ✔️ | ✔️ | ✔️ | TF/JAX |
| keras-rl | 7 | ✔️ (gym) | | | | Keras |
| rlpyt | 11 | | ✔️ | ✔️ | ✔️ | PyTorch |
| ChainerRL | 18 | ✔️ (gym) | ✔️ | ✔️ | | Chainer |
| Sample Factory | 1 (4) | ✔️ (gym) | ✔️ | ✔️ | ✔️ | PyTorch |
| Tianshou | 20 | ✔️ (gym) | ✔️ | ✔️ | ✔️ | PyTorch |

(1): access date: 2021-08-08

(2): not all algorithms support this feature

(3): TQC and QR-DQN in sb3-contrib instead of main repo

(4): super fast APPO!

High-Quality Software Engineering Standard

| RL Platform | Documentation | Code Coverage | Type Hints |
| --- | --- | --- | --- |
| Baselines | | | |
| Stable-Baselines | ✔️ | ✔️ | |
| Stable-Baselines3 | ✔️ | ✔️ | ✔️ |
| Ray/RLlib | | (1) | ✔️ |
| SpinningUp | | | |
| Dopamine | | | |
| ACME | | (1) | ✔️ |
| keras-rl | ✔️ | (1) | |
| rlpyt | ✔️ | ✔️ | |
| ChainerRL | ✔️ | ✔️ | |
| Sample Factory | ➖ | ✔️ | |
| Tianshou | ✔️ | ✔️ | ✔️ |

(1): it has continuous integration but the coverage rate is not available

Reproducible and High-Quality Results

Tianshou has its own unit tests. Unlike other platforms, these tests cover the full agent training procedure for every implemented algorithm: a test fails if an algorithm cannot train an agent to perform well enough within a limited number of epochs on a toy scenario. The unit tests thus safeguard the reproducibility of our platform. Check out the GitHub Actions page for more detail.

The Atari/MuJoCo benchmark results are under the examples/atari/ and examples/mujoco/ folders. Our MuJoCo results beat most existing benchmarks.

Modularized Policy

We decouple all of the algorithms roughly into the following parts:

  • __init__: initialize the policy;
  • forward: compute actions from given observations;
  • process_fn: preprocess data from the replay buffer (since we have reformulated all algorithms as replay-buffer based algorithms);
  • learn: learn from a given batch of data;
  • post_process_fn: update the replay buffer from the learning process (e.g., the prioritized replay buffer needs to update its weights);
  • update: the main training interface, i.e., process_fn -> learn -> post_process_fn.

Within this API, we can interact with different policies conveniently.
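
For example, here is a minimal sketch of a custom policy under this API; the RandomPolicy class below is hypothetical (not part of Tianshou) and just picks uniformly random discrete actions:

import numpy as np
from tianshou.data import Batch
from tianshou.policy import BasePolicy

class RandomPolicy(BasePolicy):
    """A hypothetical policy: uniform random discrete actions, no learning."""

    def __init__(self, action_num, **kwargs):
        super().__init__(**kwargs)
        self.action_num = action_num

    def forward(self, batch, state=None, **kwargs):
        # compute one action per observation in the incoming batch
        act = np.random.randint(self.action_num, size=len(batch.obs))
        return Batch(act=act, state=state)

    def learn(self, batch, **kwargs):
        # nothing to optimize; return an (empty) dict of training statistics
        return {}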

Quick Start

This is an example of a Deep Q-Network (DQN). You can also run the full script at test/discrete/test_dqn.py.

First, import some relevant packages:

import gym, torch, numpy as np, torch.nn as nn
from torch.utils.tensorboard import SummaryWriter
import tianshou as ts

Define some hyper-parameters:

task = 'CartPole-v0'
lr, epoch, batch_size = 1e-3, 10, 64
train_num, test_num = 10, 100
gamma, n_step, target_freq = 0.9, 3, 320
buffer_size = 20000
eps_train, eps_test = 0.1, 0.05
step_per_epoch, step_per_collect = 10000, 10
writer = SummaryWriter('log/dqn')  # tensorboard is also supported!
logger = ts.utils.TensorboardLogger(writer)

Make environments:

# you can also try with SubprocVectorEnv
train_envs = ts.env.DummyVectorEnv([lambda: gym.make(task) for _ in range(train_num)])
test_envs = ts.env.DummyVectorEnv([lambda: gym.make(task) for _ in range(test_num)])
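
As the comment above suggests, SubprocVectorEnv is a drop-in replacement that steps each environment in its own subprocess; a minimal sketch, reusing the same task and train_num:

# same interface as DummyVectorEnv, but truly parallel across subprocesses
train_envs = ts.env.SubprocVectorEnv([lambda: gym.make(task) for _ in range(train_num)])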

Define the network:

from tianshou.utils.net.common import Net
# you can define other networks by following this API:
env = gym.make(task)
state_shape = env.observation_space.shape or env.observation_space.n
action_shape = env.action_space.shape or env.action_space.n
net = Net(state_shape=state_shape, action_shape=action_shape, hidden_sizes=[128, 128, 128])
optim = torch.optim.Adam(net.parameters(), lr=lr)

Set up the policy and collectors:

policy = ts.policy.DQNPolicy(net, optim, gamma, n_step, target_update_freq=target_freq)
train_collector = ts.data.Collector(policy, train_envs, ts.data.VectorReplayBuffer(buffer_size, train_num), exploration_noise=True)
test_collector = ts.data.Collector(policy, test_envs, exploration_noise=True)  # because DQN uses epsilon-greedy method

Let's train it:

result = ts.trainer.offpolicy_trainer(
    policy, train_collector, test_collector, epoch, step_per_epoch, step_per_collect,
    test_num, batch_size, update_per_step=1 / step_per_collect,
    train_fn=lambda epoch, env_step: policy.set_eps(eps_train),
    test_fn=lambda epoch, env_step: policy.set_eps(eps_test),
    stop_fn=lambda mean_rewards: mean_rewards >= env.spec.reward_threshold)
print(f'Finished training! Use {result["duration"]}')

Save / load the trained policy (it's exactly the same as saving/loading a PyTorch nn.Module):

torch.save(policy.state_dict(), 'dqn.pth')
policy.load_state_dict(torch.load('dqn.pth'))

Watch the performance at 35 FPS:

collector = ts.data.Collector(policy, env, exploration_noise=True)
collector.collect(n_episode=1, render=1 / 35)

Look at the results saved in TensorBoard (run in your terminal):

$ tensorboard --logdir log/dqn

You can check out the documentation for advanced usage.

It's worth a try: here is a test on a laptop (i7-8750H + GTX 1060). Training an agent with vanilla policy gradient on the CartPole-v0 task takes only 3 seconds (results for a given seed may differ across platforms and devices):

$ python3 test/discrete/test_pg.py --seed 0 --render 0.03


Contributing

Tianshou is still under development. More algorithms and features are going to be added, and we always welcome contributions to help make Tianshou better. If you would like to contribute, please check out the contributing guide in the documentation.

Citing Tianshou

If you find Tianshou useful, please cite it in your publications.

@article{tianshou,
  title={Tianshou: a Highly Modularized Deep Reinforcement Learning Library},
  author={Weng, Jiayi and Chen, Huayu and Yan, Dong and You, Kaichao and Duburcq, Alexis and Zhang, Minghao and Su, Hang and Zhu, Jun},
  journal={arXiv preprint arXiv:2107.14171},
  year={2021}
}


Acknowledgment

Tianshou was previously a reinforcement learning platform based on TensorFlow. You can check out the priv branch for more detail. Many thanks to Haosheng Zou for his pioneering work on Tianshou before version 0.1.1.

We would like to thank TSAIL and the Institute for Artificial Intelligence, Tsinghua University, for providing such an excellent AI research platform.