Agit also bundles a set of environments for evaluating an RL algorithm's potential applicability to real-world systems.
The suite currently provides the following environments:
| domain_name | task_name |
|-------------|-----------------------|
| cartpole | realworld_balance |
| cartpole | realworld_swingup |
| humanoid | realworld_stand |
| humanoid | realworld_walk |
| manipulator | realworld_bring_ball |
| manipulator | realworld_bring_peg |
| manipulator | realworld_insert_ball |
| manipulator | realworld_insert_peg |
| quadruped | realworld_walk |
| quadruped | realworld_run |
| walker | realworld_stand |
| walker | realworld_walk |
These environments can be created as in the following example:

```python
from agit import rwrl2gym

env = rwrl2gym.make('humanoid', 'realworld_walk',
                    combined_challenge='easy',
                    environment_kwargs={'flat_observation': True})
```
The specification arguments (challenges) are explained at:
https://github.com/google-research/realworldrl_suite#challenges
The full signature of `make` is:

```python
def make(
    domain_name,
    task_name,
    combined_challenge=None,
    safety_spec=None,
    delay_spec=None,
    noise_spec=None,
    perturb_spec=None,
    dimensionality_spec=None,
    multiobj_spec=None,
    environment_kwargs=None
):
```
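Each `*_spec` argument is a dictionary that is passed through to the underlying realworldrl_suite. As an illustration only, the sketch below shows what such dictionaries might look like; the exact keys and values are defined by the realworldrl_suite documentation, and the values here are hypothetical:

```python
# Hypothetical spec dictionaries; the accepted keys are defined by
# the realworldrl_suite, so consult its README for the real options.
safety_spec = {'enable': True, 'observations': True}  # turn on safety constraints
delay_spec = {'enable': True, 'actions': 3}           # delay applied actions by 3 steps

# These would be passed straight through to make(), e.g.:
# env = rwrl2gym.make('cartpole', 'realworld_swingup',
#                     safety_spec=safety_spec, delay_spec=delay_spec)
```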
RWRL/realworld_env.py is a simple example of using a real-world environment.
RWRL/ray_rllib_realworld_env.py is a simple example of using the gym environment with the Ray RLlib framework.
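For orientation, the environments returned by `rwrl2gym.make` follow the classic gym `reset()`/`step()` interface. The toy class below is a minimal self-contained sketch of that interface (it is hypothetical and not the actual RWRL wrapper):

```python
import random

class ToyRealworldEnv:
    """Minimal gym-style environment sketch (hypothetical, not the RWRL wrapper).

    Mimics the reset()/step() interface that rwrl2gym environments expose.
    """

    def __init__(self, horizon=200):
        self.horizon = horizon  # episode length in steps
        self._t = 0

    def reset(self):
        # Start a new episode and return the initial observation.
        self._t = 0
        return [0.0]

    def step(self, action):
        # Advance one step: return (observation, reward, done, info),
        # the classic gym 4-tuple.
        self._t += 1
        obs = [random.uniform(-1.0, 1.0)]
        reward = 1.0 if abs(action) < 0.5 else 0.0  # toy reward rule
        done = self._t >= self.horizon
        return obs, reward, done, {}

# Typical interaction loop, as an RLlib worker would drive it:
env = ToyRealworldEnv(horizon=5)
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(0.0)
    total_reward += reward
```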