
Reinforcement learning, Deep Q-Learning series: applying DQN to the SpaceInvaders game in gym


The project implements Nature DQN, Double DQN, and Dueling DQN.

GitHub repository: https://github.com/xiaoxu1025/SpaceInvader/

Deep Q-Learning is essentially an approximate representation of the value function: a deep neural network is used to approximate the action-value function Q(s, a).

Nature DQN, Double DQN, and Dueling DQN are all refinements built on top of Deep Q-Learning.
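For reference, the standard update targets behind these variants (textbook formulas, not code from this project) are:

% Nature DQN: bootstrap from a separate target network Q^-
y = r + \gamma \max_{a'} Q^{-}(s', a')

% Double DQN: select the action with the online network Q, evaluate it with the target network Q^-
y = r + \gamma \, Q^{-}\big(s', \operatorname*{arg\,max}_{a'} Q(s', a')\big)

% Dueling DQN: decompose Q into a state value V and a mean-centred advantage A
Q(s, a) = V(s) + \Big( A(s, a) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a') \Big)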

Below are the model implementations for Deep Q-Learning and for Dueling DQN.

from keras.layers import Input, Conv2D, Flatten, Dense, dot
from keras.models import Model
from keras import backend as K


def create_model(input_shape, action_nums):
    # input_state: stacked game frames; input_action: one-hot encoded action
    input_state = Input(shape=input_shape)
    input_action = Input(shape=(action_nums,))
    # convolutional feature extractor
    conv1 = Conv2D(16, kernel_size=(7, 7), strides=(4, 4), activation='relu')(input_state)
    conv2 = Conv2D(32, kernel_size=(5, 5), strides=(2, 2), activation='relu')(conv1)
    conv3 = Conv2D(64, kernel_size=(3, 3), strides=(2, 2), activation='relu')(conv2)
    flattened = Flatten()(conv3)
    dense1 = Dense(512, kernel_initializer='glorot_uniform', activation='relu')(flattened)
    dense2 = Dense(256, kernel_initializer='glorot_uniform', activation='relu')(dense1)
    # Q values for all actions; the dot with the one-hot action selects Q(s, a)
    q_values = Dense(action_nums, kernel_initializer='glorot_uniform', activation='tanh')(dense2)
    q_v = dot([q_values, input_action], axes=1)
    model = Model(inputs=[input_state, input_action], outputs=q_v)
    # separate function to read the full Q-value vector, used for action selection
    q_values_func = K.function([input_state], [q_values])
    return model, q_values_func


def create_duelingDQN_model(input_shape, action_nums):
    input_state = Input(shape=input_shape)
    input_action = Input(shape=(action_nums,))
    conv1 = Conv2D(16, kernel_size=(7, 7), strides=(4, 4), activation='relu')(input_state)
    conv2 = Conv2D(32, kernel_size=(5, 5), strides=(2, 2), activation='relu')(conv1)
    conv3 = Conv2D(64, kernel_size=(3, 3), strides=(2, 2), activation='relu')(conv2)
    flattened = Flatten()(conv3)
    dense1 = Dense(512, kernel_initializer='glorot_uniform', activation='relu')(flattened)
    dense2 = Dense(256, kernel_initializer='glorot_uniform', activation='relu')(dense1)

    # state-value stream V(s)
    V = Dense(1, kernel_initializer='glorot_uniform')(dense2)

    # advantage stream A(s, a)
    A = Dense(action_nums, kernel_initializer='glorot_uniform', activation='tanh')(dense2)

    # custom layer from the repo that combines the two streams into Q(s, a);
    # a Keras layer takes its multiple inputs as a single list
    q_values = DuelingLayer()([V, A])

    q_v = dot([q_values, input_action], axes=1)
    model = Model(inputs=[input_state, input_action], outputs=q_v)
    q_values_func = K.function([input_state], [q_values])
    return model, q_values_func
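As a usage sketch (not code from the repo), the returned q_values_func can drive ε-greedy action selection; the function and variable names below are assumptions:

import numpy as np

def select_action(q_values_func, state_normal, action_nums, epsilon=0.05):
    # state_normal: one preprocessed frame stack with shape input_shape
    if np.random.rand() < epsilon:
        return np.random.randint(action_nums)                 # explore
    q_values = q_values_func([state_normal[np.newaxis]])[0]   # K.function takes and returns lists
    return int(np.argmax(q_values[0]))                        # exploit: greedy action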

Next, the Nature DQN training step:

# sample a minibatch of transitions from the replay memory
states, actions, rewards, next_states, is_ends = self.memory.sample(self.batch_size)
states_normal, actions_normal, next_states_normal = self.preprocessor.get_batch_data(states, actions, next_states)

# max_a' Q_target(s', a') computed with the target network
q_values = self.calc_target_q_values_func(next_states_normal)
max_q_values = np.max(q_values, axis=1)

new_rewards = rewards + self.gamma * max_q_values

# terminal transitions only get the immediate reward
y = np.where(is_ends, rewards, new_rewards)
y = np.expand_dims(y, axis=1)
# train on the preprocessed states, not the raw frames
loss = self.model.train_on_batch([states_normal, actions_normal], y)
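Nature DQN also keeps the target network frozen and periodically copies the online weights into it. A minimal sketch of that sync step, assuming counter attributes named self.step and self.target_update_freq (not taken from the repo):

# every target_update_freq training steps, refresh the target network
if self.step % self.target_update_freq == 0:
    self.target_model.set_weights(self.model.get_weights())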

Next, the Double DQN implementation:

# 1. use the current (online) Q network to pick the action with the largest Q value
q_values = self.calc_q_values_func(next_states_normal)
q_values_actions = np.argmax(q_values, axis=1)

q_values_actions = to_categorical(q_values_actions, self.preprocessor.action_nums)
# 2. evaluate the selected action with the target network to get the target Q value
target_q_values = self.target_model.predict_on_batch([next_states_normal, q_values_actions])
# predict_on_batch returns shape (batch_size, 1); flatten before mixing with rewards
target_q_values = target_q_values.flatten()

new_rewards = rewards + self.gamma * target_q_values

y = np.where(is_ends, rewards, new_rewards)
y = np.expand_dims(y, axis=1)

loss = self.model.train_on_batch([states_normal, actions_normal], y)
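Both training snippets rely on self.memory.sample, i.e. an experience replay buffer. A minimal sketch of such a buffer (class name and internals are assumptions, not the repo's implementation):

import random
from collections import deque
import numpy as np

class ReplayMemory:
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)

    def append(self, state, action, reward, next_state, is_end):
        self.buffer.append((state, action, reward, next_state, is_end))

    def sample(self, batch_size):
        # uniform random minibatch of stored transitions
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, is_ends = map(np.array, zip(*batch))
        return states, actions, rewards, next_states, is_ends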
