Isaac Lab Task
Ok. In this tutorial, I will provide more detail about building your own task. First of all, let's see what is inside a task directory.
[your task]/
├── config/                  # agent directory
├── mdp/                     # MDP settings
├── __init__.py
└── [your task]_env_cfg.py   # environment file
Now, let me explain each of them in detail, except __init__.py, which is just an empty file.
Environment File
Basically, this is the environment configuration file; we only consider the manager-based environment in this tutorial. The NVIDIA team provides two tutorials, one on the base environment and one on the RL environment. You do not have to go through them, because they do not explain things based on the template; thus, I will explain it in my own way.
To define an RL environment configuration file, you must do the following:
Create a class named [your task]EnvCfg that inherits from ManagerBasedRLEnvCfg.
In this class, we need to initialize the following attributes:
- scene: Scene settings.
- observations: Observation space settings.
- actions: Action space settings.
- events: Event settings.
- is_finite_horizon: Whether the task is treated as a finite- or infinite-horizon problem for the agent; defaults to False. Initialized in __post_init__.
- episode_length_s: Duration of an episode (in seconds). Initialized in __post_init__.
- decimation: Number of control action updates at sim dt per policy dt. Initialized in __post_init__.
- rewards: Reward settings.
- terminations: Termination settings.
- curriculum: Curriculum settings.
- commands: Command settings.
Initializing these attributes requires creating a configuration class for each of them; a minimal sketch is shown below. For more detail, please check the tutorials on Manager-Based Environment and Manager-Based RL Environment.
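To make this concrete, here is a minimal sketch of such a configuration class. The task name and the scene/manager classes (MySceneCfg, ObservationsCfg, and so on) are hypothetical placeholders that you would define yourself, typically in the mdp/ directory; the attribute names and the __post_init__ pattern follow the manager-based template.

from omni.isaac.lab.envs import ManagerBasedRLEnvCfg
from omni.isaac.lab.utils import configclass

# MySceneCfg, ObservationsCfg, ActionsCfg, EventCfg, RewardsCfg, and
# TerminationsCfg are hypothetical classes that you define for your task.

@configclass
class MyTaskEnvCfg(ManagerBasedRLEnvCfg):
    # scene settings
    scene: MySceneCfg = MySceneCfg(num_envs=4096, env_spacing=4.0)
    # basic MDP settings
    observations: ObservationsCfg = ObservationsCfg()
    actions: ActionsCfg = ActionsCfg()
    events: EventCfg = EventCfg()
    # RL-specific MDP settings
    rewards: RewardsCfg = RewardsCfg()
    terminations: TerminationsCfg = TerminationsCfg()

    def __post_init__(self):
        # general settings are initialized here, as described above
        self.decimation = 2             # control action updates per policy step
        self.episode_length_s = 5.0     # episode duration in seconds
        self.is_finite_horizon = False  # treat the task as infinite-horizon
        # simulation settings
        self.sim.dt = 1 / 120           # simulation time step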
Agent Directory
The agent directory is where you build the task configuration for each robot in your environment. That is, for a single task, you can have different robots learn it, so you need to specify how each robot handles the task. For each robot, you need the following modules:
[your robot]/
├── agents/                                # set the parameters for each RL library
├── __init__.py                            # used to register the task
└── [your robot]_[your task]_env_cfg.py    # you can use whatever name you want here
Now, I will give more detail on each module.
agents/
This is the directory where you set the model and training parameters for each RL library; a sketch is shown below. You can check the example here.
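As an illustration, here is a minimal sketch of an agent configuration for the rsl_rl library, assuming the wrapper configuration classes shipped with omni.isaac.lab_tasks; the class name, experiment name, and hyperparameter values are hypothetical and should be tuned for your task.

from omni.isaac.lab.utils import configclass
from omni.isaac.lab_tasks.utils.wrappers.rsl_rl import (
    RslRlOnPolicyRunnerCfg,
    RslRlPpoActorCriticCfg,
    RslRlPpoAlgorithmCfg,
)

@configclass
class MyTaskPPORunnerCfg(RslRlOnPolicyRunnerCfg):
    # rollout and logging settings (hypothetical values)
    num_steps_per_env = 24
    max_iterations = 1500
    save_interval = 50
    experiment_name = "my_task"
    # actor-critic network settings
    policy = RslRlPpoActorCriticCfg(
        init_noise_std=1.0,
        actor_hidden_dims=[256, 128, 64],
        critic_hidden_dims=[256, 128, 64],
        activation="elu",
    )
    # PPO algorithm settings
    algorithm = RslRlPpoAlgorithmCfg(
        value_loss_coef=1.0,
        use_clipped_value_loss=True,
        clip_param=0.2,
        entropy_coef=0.005,
        num_learning_epochs=5,
        num_mini_batches=4,
        learning_rate=1.0e-3,
        schedule="adaptive",
        gamma=0.99,
        lam=0.95,
        desired_kl=0.01,
        max_grad_norm=1.0,
    )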
[your robot]_[your task]_env_cfg.py
This file defines how the robot should be set up for the task: for example, setting the robot in the scene, setting randomization ranges, and other task parameters. Furthermore, the configuration for play (evaluation) should be set here as well. Therefore, the following two classes should be defined here (see the sketch after this list):
- [your robot][your task]EnvCfg, which inherits from [your task]EnvCfg
- [your robot][your task]EnvCfg_PLAY, which inherits from [your robot][your task]EnvCfg
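Here is a minimal sketch of these two classes, assuming a hypothetical "Lift" task and a Franka robot; FRANKA_PANDA_CFG is the pre-defined robot configuration from omni.isaac.lab_assets, and the attributes tweaked in the _PLAY variant are just illustrative.

from omni.isaac.lab.utils import configclass
from omni.isaac.lab_assets import FRANKA_PANDA_CFG  # pre-defined robot configuration

from ..lift_env_cfg import LiftEnvCfg  # hypothetical task environment configuration

@configclass
class FrankaLiftEnvCfg(LiftEnvCfg):
    def __post_init__(self):
        super().__post_init__()
        # set the scene robot to the Franka arm
        self.scene.robot = FRANKA_PANDA_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")

@configclass
class FrankaLiftEnvCfg_PLAY(FrankaLiftEnvCfg):
    def __post_init__(self):
        super().__post_init__()
        # a smaller scene and no observation noise for interactive play
        self.scene.num_envs = 50
        self.scene.env_spacing = 2.5
        self.observations.policy.enable_corruption = False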
__init__.py
This is where we register the task with Gymnasium. Here is the template of the register file:
import gymnasium as gym

from . import agents, [your robot]_[your task]_env_cfg

gym.register(
    id="Isaac-[your task]-[your robot]-v[version number]",
    entry_point="omni.isaac.lab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": [your robot]_[your task]_env_cfg.[your robot][your task]EnvCfg,
        "[RL library]_cfg_entry_point": [link to RL library configuration file],
        ...
    },
)

gym.register(
    id="Isaac-[your task]-[your robot]-Play-v[version number]",
    entry_point="omni.isaac.lab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": [your robot]_[your task]_env_cfg.[your robot][your task]EnvCfg_PLAY,
        "[RL library]_cfg_entry_point": [link to RL library configuration file],
        ...
    },
)
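To make the template concrete, here is what the first registration might look like for the hypothetical Franka lift example used above, with rsl_rl as the RL library; the id, module name, and runner class are all assumptions.

gym.register(
    id="Isaac-Lift-Franka-v0",
    entry_point="omni.isaac.lab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        # environment configuration defined in [your robot]_[your task]_env_cfg.py
        "env_cfg_entry_point": franka_lift_env_cfg.FrankaLiftEnvCfg,
        # hypothetical PPO runner configuration defined under agents/
        "rsl_rl_cfg_entry_point": agents.rsl_rl_ppo_cfg.LiftPPORunnerCfg,
    },
)

The Play variant is registered the same way with FrankaLiftEnvCfg_PLAY, and its id gains a "-Play-" segment so that evaluation scripts pick up the smaller, noise-free configuration.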