This tutorial gives a toy example of how to use a PS4 controller to collect demonstrations in the PyBullet simulation within the Ark framework.
In this section, we demonstrate how to load both a URDF model and a primitive shape into the PyBullet simulation. You can define your own objects in YAML configuration files and place them under the config/objects directory.
To load a simple primitive shape such as a cube, define its properties in a YAML file. Below is an example for a cube:
name: "Cube"
config:
source: "primitive"
publish_ground_truth: False
visual:
shape_type: "GEOM_BOX"
visual_shape:
halfExtents:
- 0.03
- 0.015
- 0.03
rgbaColor:
- 1.0
- 0.0
- 0.5
- 1.0
collision:
shape_type: "GEOM_BOX"
collision_shape:
halfExtents:
- 0.03
- 0.015
- 0.03
multi_body:
baseMass: 2
base_position:
- 0.0
- -0.2
- 0.7
base_orientation:
- 0.0
- 0.0
- 0.0
- 1.0
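For reference, a primitive config like this maps quite directly onto PyBullet's shape-creation API. The snippet below is a minimal, illustrative sketch of the raw PyBullet calls the cube definition corresponds to; the Ark simulator node performs the actual loading from the YAML, so you would not normally write this yourself.

import pybullet as p

# Illustrative equivalent of the cube config above.
p.connect(p.DIRECT)
visual = p.createVisualShape(p.GEOM_BOX,
                             halfExtents=[0.03, 0.015, 0.03],
                             rgbaColor=[1.0, 0.0, 0.5, 1.0])
collision = p.createCollisionShape(p.GEOM_BOX,
                                   halfExtents=[0.03, 0.015, 0.03])
cube_id = p.createMultiBody(baseMass=2,
                            baseCollisionShapeIndex=collision,
                            baseVisualShapeIndex=visual,
                            basePosition=[0.0, -0.2, 0.7],
                            baseOrientation=[0.0, 0.0, 0.0, 1.0])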
You can also load customized objects, such as a URDF-based object. Below is an example for a drill:
name: "Drill"
config:
source: "urdf"
urdf_path: "config/objects/power_drill/model.urdf"
publish_ground_truth: False
base_position:
- -0.5
- 0.5
- 0.3
base_orientation: # x, y, z, w
- 0.0
- 0.0
- 0.0
- 1.0
global_scaling: 1.5
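For comparison, a URDF-based object corresponds roughly to a single p.loadURDF call. This is only a sketch of what the framework does with the config, assuming a PyBullet client is already connected.

import pybullet as p

# Illustrative equivalent of the drill config above.
drill_id = p.loadURDF("config/objects/power_drill/model.urdf",
                      basePosition=[-0.5, 0.5, 0.3],
                      baseOrientation=[0.0, 0.0, 0.0, 1.0],  # x, y, z, w
                      globalScaling=1.5)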
Once you've defined the objects in the YAML files, you'll need to add them to the global configuration file so that they are loaded into the environment.
In the config/global_config.yaml file, add the paths to your object YAML files under the objects section:
objects:
  - "objects/drill.yaml"
  - "objects/cube.yaml"
To load a robot into the simulation, you first need to create a YAML configuration file for your robot in the config/robots directory. Below is an example of how to define a robot configuration for the Franka:
name: "Franka"
config:
source: "urdf"
urdf_path: "panda_with_gripper.urdf"
class_dir: "../ark_robots/ark_franka/franka_panda"
frequency: 10 # The default frequency is 240Hz
merge_fixed_links: False
ee_index: 11 # End-effector index
base_position:
- -0.55
- 0.0
- 0.6
base_orientation: # Orientation as [x, y, z, w]
- 0
- 0
- 0
- 1
use_fixed_base: True
initial_configuration:
- 0
- -0.785
- 0
- -2.356
- 0
- 1.571
- 0.785
- 0
- 0
- 0
- 0
- 0
joint_groups:
arm:
control_mode: "position"
joints:
- "panda_joint1"
- "panda_joint2"
- "panda_joint3"
- "panda_joint4"
- "panda_joint5"
- "panda_joint6"
- "panda_joint7"
gripper:
control_mode: "position"
joints:
- "panda_finger_joint1"
all:
control_mode: "position"
joints:
- "panda_joint1"
- "panda_joint2"
- "panda_joint3"
- "panda_joint4"
- "panda_joint5"
- "panda_joint6"
- "panda_joint7"
- "panda_finger_joint1"
Once the robot YAML file has been created, you need to add its path to the robots section of the global configuration file config/global_config.yaml. Here's how you can do it:
robots:
  - "robots/franka.yaml"
In this section, we demonstrate how to load two types of cameras into the simulation: a fixed-position camera and a camera attached to a joint. You can choose to use one or both types of cameras. The camera configuration files are defined in YAML format and placed under the config/sensors directory.
For a camera with a fixed position, specify the camera_type as fixed and add a fix section under sim_config in the YAML file. Below is an example configuration for a fixed camera:
name: "camera
config:
class_dir: "../ark_realsense/intel_realsense" # Directory where the class is located
type: "Camera"
camera_type: "fixed"
frequency: 30 # Default is 30Hz
width: 256 # Default is 640
height: 256 # Default is 480
streams:
depth:
enable: True
format: "z16"
color:
enable: True
format: "bgr8"
infrared:
enable: True
format: "y8"
sim_config:
visualize: True
fov: 60 # Default is 60
near_val: 0.1 # Default is 0.1
far_val: 100.0 # Default is 100.0
fix:
camera_target_position: [0.15, 0.0, 0.5]
distance: 0.9
yaw: -90.0
pitch: 215.0
roll: 0.0
up_axis_index: 2
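The fix parameters mirror PyBullet's yaw/pitch/roll camera helpers. The sketch below shows roughly how such a fixed camera renders a 256x256 frame with p.computeViewMatrixFromYawPitchRoll, p.computeProjectionMatrixFOV, and p.getCameraImage; the Ark camera driver does this internally.

import pybullet as p

# Build view and projection matrices from the fixed-camera parameters above.
view_matrix = p.computeViewMatrixFromYawPitchRoll(
    cameraTargetPosition=[0.15, 0.0, 0.5],
    distance=0.9,
    yaw=-90.0,
    pitch=215.0,
    roll=0.0,
    upAxisIndex=2)
projection_matrix = p.computeProjectionMatrixFOV(
    fov=60, aspect=256 / 256, nearVal=0.1, farVal=100.0)

# Render one frame (assumes a connected client with a scene loaded).
width, height, rgb, depth, seg = p.getCameraImage(
    256, 256, viewMatrix=view_matrix, projectionMatrix=projection_matrix)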
To define a camera attached to a robot joint, change the camera_type to attached and replace the fix section with an attach section in the YAML configuration. You will also need to specify the robot's name and the link where the camera is attached. Below is an example configuration for an attached camera:
name: "gripper_camera"
config:
class_dir: "../ark_realsense/intel_realsense" # Directory where the class is located
type: "Camera"
camera_type: "attached"
frequency: 30 # Default is 30Hz
width: 256 # Default is 640
height: 256 # Default is 480
streams:
depth:
enable: True
format: "z16"
color:
enable: True
format: "bgr8"
infrared:
enable: True
format: "y8"
sim_config:
visualize: True
fov: 60 # Default is 60
near_val: 0.1 # Default is 0.1
far_val: 100.0 # Default is 100.0
attach:
parent_name: "Franka"
parent_link: "camera_link"
offset_translation: [0.0, 0.0, 0.035] # Default is [0, 0, 0]
offset_rotation: [0.0, 0.0, 0.0] # Euler or quaternions, default is [0, 0, 0]
rel_camera_target: [0, 1, 0]
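An attached camera recomputes its pose from the parent link every frame. The function below is one possible way to turn the attach parameters into a view matrix using p.getLinkState and p.computeViewMatrix; it is not the Ark driver's actual implementation, it ignores offset_rotation for brevity, and it assumes the parent link index has already been looked up.

import pybullet as p

def attached_camera_view(robot_id, link_index,
                         offset_translation=(0.0, 0.0, 0.035),
                         rel_camera_target=(0, 1, 0)):
    """Illustrative sketch: build a view matrix that follows a robot link."""
    # World pose of the link the camera is attached to.
    link_state = p.getLinkState(robot_id, link_index, computeForwardKinematics=True)
    link_pos, link_orn = link_state[4], link_state[5]
    # Camera eye: the link frame shifted by the configured offset.
    eye, _ = p.multiplyTransforms(link_pos, link_orn, offset_translation, [0, 0, 0, 1])
    # Look-at point: the relative target expressed in the link frame.
    target, _ = p.multiplyTransforms(link_pos, link_orn, rel_camera_target, [0, 0, 0, 1])
    return p.computeViewMatrix(cameraEyePosition=eye,
                               cameraTargetPosition=target,
                               cameraUpVector=[0, 0, 1])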
After creating the camera YAML files, you need to add them to the sensors section of the global configuration file (config/global_config.yaml). Here's how you can include the camera files:
sensors:
  - "sensors/attached_camera.yaml"
  - "sensors/fixed_camera.yaml"
In this section, we show how the inputs from the PS4 controller are mapped to the end effector's velocity in the x, y, and z directions, its rotation (roll, pitch, and yaw), and the position of the gripper. Below is an example of how you can implement this mapping in the get_action function in nodes/ps4policy.py:
def get_action(self, q):
    """
    Customize the action based on the PS4 controller input for optimal performance.
    The corresponding buttons are defined in the ps4controller.py file.
    """
    # Linear velocity
    v_x = -self.axis_states['D-Pad Vertical']
    v_y = -self.axis_states['D-Pad Horizontal']
    if self.button_states['L1']:
        v_z = 1.0  # Positive z direction (up)
    elif self.button_states['R1']:
        v_z = -1.0  # Negative z direction (down)
    else:
        v_z = 0.0  # No movement in z direction

    # Angular velocity (rotation)
    v_roll = -(self.axis_states['Right Stick - Vertical'] - 128) / 128
    if np.abs(v_roll) < 0.03:
        v_roll = 0  # Deadzone for small movements
    v_pitch = -(self.axis_states['Right Stick - Horizontal'] - 128) / 128
    if np.abs(v_pitch) < 0.03:
        v_pitch = 0  # Deadzone for small movements
    v_yaw = -(self.axis_states['Left Stick - Horizontal'] - 128) / 128
    if np.abs(v_yaw) < 0.03:
        v_yaw = 0  # Deadzone for small movements

    # Gripper position
    q_gripper = self.axis_states['R2 Trigger'] / 255  # Normalize the gripper position (0 to 1)
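Each stick axis repeats the same raw-to-velocity conversion with a deadzone. A small helper like the one below (purely illustrative; stick_to_velocity is a hypothetical name, not part of the repository) can keep get_action compact:

def stick_to_velocity(raw_value, center=128, deadzone=0.03):
    """Map a raw 0-255 stick reading to a velocity in [-1, 1], zeroing small movements."""
    v = -(raw_value - center) / center
    return 0.0 if abs(v) < deadzone else v

# For example, the three rotation axes above could then be written as:
# v_roll  = stick_to_velocity(self.axis_states['Right Stick - Vertical'])
# v_pitch = stick_to_velocity(self.axis_states['Right Stick - Horizontal'])
# v_yaw   = stick_to_velocity(self.axis_states['Left Stick - Horizontal'])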
To further personalize the PS4 controller mapping, you can adjust the controls to match your own usage habits. The nodes/ps4controller.py file defines the available buttons and axis states for the controller, and these mappings can be adjusted to suit specific needs or preferences. The buttons and axis states available in nodes/ps4controller.py are:
self.button_states = {
    'Cross (X)': False,
    'Circle (O)': False,
    'Triangle': False,
    'Square': False,
    'L1': False,
    'R1': False,
    'Share': False,
    'Options': False,
    'L3 (Left Stick Click)': False,
    'R3 (Right Stick Click)': False,
    'PS Button': False,
    'Touchpad Click': False
}
self.axis_states = {
    'Left Stick - Horizontal': 0,
    'Left Stick - Vertical': 0,
    'Right Stick - Horizontal': 0,
    'Right Stick - Vertical': 0,
    'L2 Trigger': 0,
    'R2 Trigger': 0,
    'D-Pad Horizontal': 0,
    'D-Pad Vertical': 0
}
After loading the objects, cameras, and robot, and setting up the PS4 controller, the next step is to configure the Gym environment in scripts/franka_env.py.
In the Gym environment, we need to specify the topics that we want to publish and subscribe to. These topics allow communication with other components of the simulation, such as the robot’s joint states, camera images, and gripper states.
The following example demonstrates how to define the action and observation spaces for the environment, specifying the topics that the environment will publish and subscribe to:
def __init__(self, config=None, sim=True):
    environment_name = "Franka_Enviroment"
    action_space = {
        "Franka/joint_group_command/sim": joint_group_command_t,
    }
    observation_space = {
        'Franka/joint_states/sim': joint_state_t,  # Assuming joint states are also packed in a similar way
        'camera/rgbd/sim': rgbd_t,
        'gripper_camera/rgbd/sim': rgbd_t
    }
The action_packing function takes the action and converts it into a format suitable for communication using LCM; in this example, the joint group command is packed and sent to the robot. The observation_unpacking function extracts and converts the observation data from the LCM message format, unpacking the robot's joint states and the RGB-D data from both the main camera and the gripper camera. The unpacked data is returned as a dictionary and serves as the observation when step is called.
def action_packing(self, action):
    """
    Pack the action into a message format suitable for LCM.
    :param action: The action to be packed.
    :return: A dictionary containing the packed action.
    """
    return {
        'Franka/joint_group_command/sim': pack.joint_group_command(
            name="all",
            cmd=action,
        ),
    }

def observation_unpacking(self, observation):
    """
    Unpack the observation from the message format.
    :param observation: The observation to be unpacked.
    :return: The unpacked observation.
    """
    joint_states = unpack.joint_state(observation['Franka/joint_states/sim'])
    camera_rgb, camera_depth = unpack.rgbd(observation['camera/rgbd/sim'])
    gripper_camera_rgb, gripper_camera_depth = unpack.rgbd(observation['gripper_camera/rgbd/sim'])
    return {"joint_states": joint_states,
            "camera_rgb": camera_rgb,
            "camera_depth": camera_depth,
            "gripper_camera_rgb": gripper_camera_rgb,
            "gripper_camera_depth": gripper_camera_depth}
The reset_objects function resets the robot to the initial configuration defined in the YAML file, ensuring that the robot starts in its predefined pose. It also resets the position and orientation of any objects that were loaded into the simulation. This function is typically called when the environment is reset, allowing for consistent starting conditions.
def reset_objects(self):
    """
    Reset the objects in the environment.
    This method is called when the environment is reset.
    """
    # Implement your logic to reset objects in the environment
    print("Resetting objects in the Franka environment.")
    self.reset_component("Franka")
    # Randomly reset the positions of the objects
    self.reset_component("BluePlate", base_position=[random.uniform(-0.2, -0.15), random.uniform(0.1, 0.2), 0.7])
    self.reset_component("Mug", base_position=[random.uniform(-0.2, -0.15), random.uniform(-0.2, -0.1), 0.7])
Now that you've set up the simulation environment and the PS4 controller, you can begin collecting data. A toy example is provided in the ps4control_gym_example.py file, which demonstrates how to save key data such as the robot's joint states, the end effector's position and orientation (as a quaternion), and the RGB and depth images from both the main camera and the gripper camera. You can modify this example to collect additional data as required for your specific use case.
import os
import pickle

from scripts.franka_env import FrankaEnv
from nodes.ps4policy import ExpertPolicyPS4

SIM = True  # Set to False if you want to use the real robot
CONFIG = 'config/global_config.yaml'
DATA_SAVE_PATH = 'data/'  # Path to save the collected data

if not os.path.exists(DATA_SAVE_PATH):
    os.makedirs(DATA_SAVE_PATH)

# Set the number of trajectories you want to collect
num_traj = 2

# Initialize the Franka environment
env = FrankaEnv(sim=SIM, config=CONFIG)

# Initialize the PS4 controller policy
ps4policy = ExpertPolicyPS4(node_name="ps4policy", global_config=CONFIG)

for i in range(num_traj):
    print(f"Collecting trajectory {i}...")
    trajectory = []
    obs, _ = env.reset()
    ps4policy.reset()
    start_controller = False
    trajectory.append({'joint_states': obs['joint_states'][2],
                       'eef_position': ps4policy.initial_ee_position,
                       'eef_orientation': ps4policy.initial_ee_orientation,
                       'camera_rgb': obs['camera_rgb'],
                       'camera_depth': obs['camera_depth'],
                       'gripper_camera_rgb': obs['gripper_camera_rgb'],
                       'gripper_camera_depth': obs['gripper_camera_depth']
                       })
    while start_controller is False:
        print("Press Cross ❌ on the PS4 controller to start...")
        start_controller = ps4policy.button_states['Cross (X)']
    ps4policy.unlocked = True
    # After pressing Cross (X), the controller is unlocked and ready to collect data
    print("Controller started, collecting data...")
    finished_rollout = False
    # Get the initial state from the environment
    q = obs['joint_states'][2].tolist()  # joints 1-7, fingers 1-2
    q = q[:7] + [0] + q[-2:] + [0] * 2  # joints 1-7, hand joint, fingers 1-2, camera joint, tcp joint
    while finished_rollout is False:
        # Get the action from the PS4 controller
        new_q, gripper_position, save_transition = ps4policy.get_action(q)
        # Execute the action in the environment, defined in the franka.yaml
        action = new_q[:7] + [gripper_position]
        obs, _, _, _, _ = env.step(action)
        # Get the new end-effector position and orientation
        q = obs['joint_states'][2].tolist()
        q = q[:7] + [0] + q[-2:] + [0] * 2
        new_ee_position, new_ee_orientation = ps4policy.forward_kinematics(q)
        if save_transition:
            trajectory.append({'joint_states': obs['joint_states'][2],
                               'eef_position': new_ee_position,
                               'eef_orientation': new_ee_orientation,
                               'camera_rgb': obs['camera_rgb'],
                               'camera_depth': obs['camera_depth'],
                               'gripper_camera_rgb': obs['gripper_camera_rgb'],
                               'gripper_camera_depth': obs['gripper_camera_depth']
                               })
        if ps4policy.button_states["PS Button"]:
            print("PS 🎮 Button pressed, saving trajectory.")
            finished_rollout = True
    with open(os.path.join(DATA_SAVE_PATH, f'trajectory_{i}.pkl'), 'wb') as f:
        pickle.dump(trajectory, f)
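Each trajectory is saved as a pickled list of per-step dictionaries. The snippet below is a quick way to inspect a recorded file afterwards; the file name follows the example above, and camera_rgb is assumed to be a NumPy image array.

import pickle

with open("data/trajectory_0.pkl", "rb") as f:
    trajectory = pickle.load(f)

print(f"{len(trajectory)} transitions recorded")
first = trajectory[0]
print("keys per transition:", list(first.keys()))
print("camera_rgb shape:", first["camera_rgb"].shape)  # assumes a NumPy array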
To start the simulation, follow these steps:
In the first terminal, run the command to start the Ark registry:
ark registry start
In the second terminal, run the PS4 controller node in the arkinterfaces folder:
python ps4controller.py
In the third terminal, start the simulation node in the examples folder:
python sim_node.py
Finally, in the fourth terminal, run the data collection example in the examples folder:
python ps4control_gym_example.py
Once you’ve started these processes, the PyBullet GUI will appear with all the loaded objects. Follow the instructions in the terminal to begin collecting your data. Enjoy!
The PS4 controller interface is provided in the ark_interfaces repository: https://github.com/Robotics-Ark/ark_interfaces