Building on the previous tutorial: Setting up an Ark Environment

In this tutorial, we walk through how to interact with the FrankaEnv environment from Ark. The interface is built to resemble OpenAI Gym, enabling easy integration with reinforcement learning and control pipelines.
Before starting, ensure:

- The `FrankaEnv` module is available.
- The global configuration file exists (`config/global_config.yaml`).

```python
from scripts.franka_env import FrankaEnv
import time
```

- `FrankaEnv`: The main environment class that wraps the Franka robot or its simulation.
- `time`: Used to introduce delays if needed (not mandatory for basic control).

```python
SIM = True  # Set to False if you want to use the real robot
CONFIG = 'config/global_config.yaml'
```

- `SIM`: Flag to toggle between simulation (`True`) and real robot (`False`).
- `CONFIG`: Path to the YAML config that defines robot behavior, controller settings, etc.

Create the environment:

```python
env = FrankaEnv(sim=SIM, config=CONFIG)
```
This creates an instance of the environment. Internally, it sets up either the simulation or the real-robot interface, depending on the `SIM` flag, and loads the settings from the config file.
```python
observation, info = env.reset()
```

- `observation`: Initial state of the robot.
- `info`: Optional debug or metadata.
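If you want to sanity-check what comes back from the environment, you can print the reset values; the exact structure of the observation depends on your config:

```python
observation, info = env.reset()
print(type(observation))  # container type of the state
print(observation)        # initial robot state
print(info)               # optional debug or metadata
```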
Next, define a control policy:

```python
def policy(observation):
    ...
    return action
```
This is a placeholder for your control policy. It takes in the current `observation` and returns an `action`. The policy could be a hard-coded rule, inverse kinematics output, or a learned neural policy.
Example:

```python
import random

def policy(obs):
    # Return a random 9-dimensional action, one value per controlled joint
    return [random.uniform(-3, 3) for _ in range(9)]
```
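A hard-coded rule can be sketched the same way. The example below is illustrative only: it always commands a fixed home configuration, assuming the 9 action values are joint position targets for the 7 arm joints plus 2 gripper fingers — check your config for the actual action space.

```python
# Hypothetical hard-coded policy: always command a fixed home pose.
# The HOME values and the joint-position interpretation of the action
# are assumptions for illustration; verify them against your config.
HOME = [0.0, -0.785, 0.0, -2.356, 0.0, 1.571, 0.785, 0.04, 0.04]

def policy(obs):
    return list(HOME)  # ignore the observation, return a copy of the pose
```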
Now run the control loop:

```python
for _ in range(1000):
    action = policy(observation)
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
```
Each call to `env.step(action)` returns:

- `observation`: Next state.
- `reward`: Optional scalar for RL use.
- `terminated`: Episode ended normally.
- `truncated`: Episode was cut off due to time or safety.
- `info`: Extra diagnostics.

To run the simulation, you need to run two nodes.
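For reference, here are the snippets above assembled into one minimal script. The `time.sleep` pacing is optional (this is what the `time` import is for), and the random policy is just for testing:

```python
import random
import time

from scripts.franka_env import FrankaEnv

SIM = True  # Set to False if you want to use the real robot
CONFIG = 'config/global_config.yaml'

def policy(obs):
    # Random exploratory policy from above; replace with your own controller
    return [random.uniform(-3, 3) for _ in range(9)]

env = FrankaEnv(sim=SIM, config=CONFIG)
observation, info = env.reset()

for _ in range(1000):
    action = policy(observation)
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
    time.sleep(0.01)  # optional pacing (~100 Hz); adjust or remove as needed
```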
You should see the Franka move randomly or according to the policy you have defined.
Full example code: https://github.com/Robotics-Ark/franka_gym_example