
Pseudocode and questions #9

Open
djsamseng opened this issue Nov 8, 2022 · 0 comments
djsamseng commented Nov 8, 2022

Hey, thanks for sharing this work! I really appreciate the in-depth, beginner-friendly blog post! I was wondering if this pseudocode is

  1. Correct
  2. Helpful to anyone else trying to understand the code

If not, feel free to close. But I would appreciate it if you could help me understand a few parts of the code! Thanks!

Questions

  1. Why is the environment reward env_reward unused, with reward depending entirely on the output of the model? Does this algorithm only imitate the expert and never take the environment reward into account? (A sketch of my understanding follows this list.)
  2. Why is value_loss determined entirely from the model output? Wouldn't this cause the model to collapse?
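
To make question 1 concrete, here is a minimal sketch of how I currently understand the reward to be recovered: it seems to come from the learned Q-function itself rather than from the environment. This is my own illustration, not code from the repo; gamma and the helper names are assumptions, while q_net matches the pseudocode below.

import torch

gamma = 0.99  # assumed discount factor, not taken from the repo

def soft_value(q_net, state):
  # V(s) = logsumexp over actions of Q(s, a): the soft value of a state
  return torch.logsumexp(q_net(state), dim=1, keepdim=True)

def implicit_reward(q_net, state, action, next_state):
  # r(s, a) ~ Q(s, a) - gamma * V(s'): the reward implied by the model itself,
  # which is why env_reward never enters the loss
  q = q_net(state).gather(1, action)  # action: LongTensor of shape (batch, 1)
  return q - gamma * soft_value(q_net, next_state)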

Pseudocode

import collections
from copy import deepcopy

import torch

def init_network():
  q_net = torch.nn.Linear(state_size, action_size)  # single linear layer critic
  target_net = deepcopy(q_net)
  
def episode_step():
  # sample an action from the softmax policy over the Q-values
  action = torch.multinomial(torch.softmax(q_net(state), dim=1), 1)
  next_state, env_reward, done, _ = env.step(action)
  memory.add((state, next_state, action, env_reward, done))  # memory = collections.deque
  update_critic(memory, expert_memory)
  target_net = deepcopy(q_net)
  
def update_critic(memory, expert_memory):
  # The idea here is that we backprop through both the expert's transitions and
  # the agent's transitions: the batch dimension mixes examples from both sources
  state = torch.cat((memory[:][0], expert_memory[:][0]))
  next_state = torch.cat((memory[:][1], expert_memory[:][1]))
  action = torch.cat((memory[:][2], expert_memory[:][2]))
  # v = soft value of the current state: logsumexp of Q over all possible actions
  v = torch.logsumexp(q_net(state), dim=1, keepdim=True)
  # next_v = soft value of state(t+1)
  next_v = torch.logsumexp(q_net(next_state), dim=1, keepdim=True)
  # q = predicted return for the (state, action) pair that was actually taken
  q = q_net(state).gather(1, action)
  loss = iq_loss(q, v, next_v)
  critic_optimizer.zero_grad()
  loss.backward()
  critic_optimizer.step()
  
def iq_loss(q, v, next_v):
  if done:
    expert_reward = q[where_expert]
    # Why is value_loss determined entirely from the model output? Wouldn't this cause the model to collapse?
    value_loss = v.mean()
  else:
    expert_reward = (q - next_v)[where_expert]
    value_loss = (v - next_v).mean()
  # Why is this negative?
  expert_reward_loss = -expert_reward.mean()
  loss = expert_reward_loss + value_loss
  return loss
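
For reference, this is how I read the non-terminal branch of iq_loss in equation form (my own transcription of the pseudocode above, so treat it as an assumption; the pseudocode also drops the discount factor that would normally multiply V(s')):

\mathcal{L}(Q) = -\,\mathbb{E}_{(s,a,s') \sim \text{expert}}\big[\, Q(s,a) - V(s') \,\big] + \mathbb{E}_{(s,s') \sim \text{expert} \cup \text{agent}}\big[\, V(s) - V(s') \,\big],
\qquad V(s) = \log \sum_a \exp Q(s,a)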