
Proximal Policy Optimization (PPO) is a family of model-free reinforcement learning algorithms for learning a mapping from states to actions, also referred to as a policy. Like Trust Region Policy Optimization (TRPO), PPO methods are based on the policy gradient algorithm and extend it by allowing multiple gradient update steps per data sample. PPO was proposed in 2017 by researchers at OpenAI.
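The mechanism that makes repeated updates on the same data safe is PPO's clipped surrogate objective, which limits how far the updated policy can move from the policy that collected the data. A minimal NumPy sketch of that objective follows; the function and parameter names are illustrative, not taken from any particular implementation:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, epsilon=0.2):
    """Clipped surrogate loss in the style of PPO (illustrative sketch).

    ratio:     pi_new(a|s) / pi_old(a|s), the per-sample probability ratio
    advantage: per-sample advantage estimate A(s, a)
    epsilon:   clip range; 0.2 is the value suggested in the original paper
    """
    unclipped = ratio * advantage
    # Clipping the ratio removes the incentive to move the policy
    # outside the interval [1 - epsilon, 1 + epsilon].
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
    # PPO maximizes the elementwise minimum of the two terms;
    # returning its negative gives a loss suitable for gradient descent.
    return -np.minimum(unclipped, clipped).mean()
```

Taking the minimum of the clipped and unclipped terms makes the objective a pessimistic bound: a large ratio only helps the objective when the advantage supports it, which is what permits several gradient steps per batch without the policy collapsing.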

Policy Gradient Methods


Algorithm

