Hi! Thank you very much for sharing your paper and source code! I am new to inverse RL and want to implement your method on a robot.
About Ant-v2
I found that the reward for each step in your Ant-v2 expert data is 1. Why is the reward set like this? And how do I run SQIL correctly with your code?
About random seeds
I found that the results across different random seeds in the humanoid experiments vary a lot; some runs only reach around 1500 points. Is this because the number of learning steps is only 50,000, or because only 1 expert demonstration is used?
I ran with: `python train_iq.py env=humanoid agent=sac expert.demos=1 method.loss=v0 method.regularize=True agent.actor_lr=3e-05 seed=0/1/2/3/4/5 agent.init_temp=1`
Your work is very valuable, and I look forward to your help in resolving my doubts.
XizoB changed the title from "Issue on Ant-v2 and Humanoid-v2 random seed Experiments" to "Issue on Ant-v2 expert data and Humanoid-v2 random seed Experiments" on Sep 22, 2022.
Hi, we only use the expert rewards for SQIL, where expert transitions get a reward of 1 and policy transitions get a reward of 0. Storing placeholder rewards of 1 in the expert data makes this easy to implement. For IQ-Learn itself, we don't use expert rewards, so this field is never used.
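For illustration, here is a minimal sketch of that SQIL reward convention. The names `relabel_sqil_rewards`, `expert_batch`, and `policy_batch` are hypothetical, not the repo's actual API:

```python
import numpy as np

def relabel_sqil_rewards(expert_batch, policy_batch):
    """SQIL reward convention: expert transitions get reward 1, the
    agent's own transitions get reward 0. A standard soft-Q/SAC update
    is then run on the combined batch."""
    # Constant reward of 1 for expert data (matches the stored demos)
    expert_batch["reward"] = np.ones(len(expert_batch["state"]))
    # Reward of 0 for the policy's own rollouts
    policy_batch["reward"] = np.zeros(len(policy_batch["state"]))
    return expert_batch, policy_batch
```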
The stochasticity you observe is likely due to training on only 1 expert demo, which leads to high variance across seeds. Reducing the temperature, e.g. to 0.5, could help with this.
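For example, rerunning your configuration with a lower initial temperature might look like this (only `agent.init_temp` is changed from your command; this is a suggested starting point, not a tested setting):

```bash
python train_iq.py env=humanoid agent=sac expert.demos=1 method.loss=v0 \
    method.regularize=True agent.actor_lr=3e-05 agent.init_temp=0.5 seed=0
```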