{"id":901,"date":"2018-08-13T13:47:22","date_gmt":"2018-08-13T21:47:22","guid":{"rendered":"http:\/\/depts.washington.edu\/uwrainlab\/?page_id=901"},"modified":"2018-08-17T12:45:22","modified_gmt":"2018-08-17T20:45:22","slug":"global-convergence-of-policy-gradient-methods-for-the-linear-quadratic-regulator","status":"publish","type":"page","link":"http:\/\/depts.washington.edu\/uwrainlab\/global-convergence-of-policy-gradient-methods-for-the-linear-quadratic-regulator\/","title":{"rendered":"Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator"},"content":{"rendered":"<p><strong>M. Fazel, R. Ge, S. Kakade , M. Mesbahi<\/strong><\/p>\n<p><strong>International Conference on Machine Learning<\/strong><\/p>\n<div class=\"gs_scl\">\n<div id=\"gsc_vcd_descr\" class=\"gsc_vcd_value\">Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model, 2) they are an \u201cend-to-end\u201d approach, directly optimizing the performance metric of interest, 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their ef\ufb01ciency from both computational and statistical perspectives. In contrast, system identi\ufb01cation and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. 
This work bridges this gap, showing that (model-free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem-dependent quantities) with regard to their sample and computational complexities.<\/div>\n<\/div>\n<div class=\"gs_scl\"><\/div>\n<p><strong>Links:<\/strong><\/p>\n<p><a href=\"https:\/\/icml.cc\/Conferences\/2018\/Schedule?showEvent=3197\"><img loading=\"lazy\" class=\"alignnone wp-image-810\" src=\"http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/download.png\" alt=\"\" width=\"26\" height=\"26\" srcset=\"http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/download.png 225w, http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/download-150x150.png 150w\" sizes=\"(max-width: 26px) 100vw, 26px\" \/><\/a> \u00a0 <a href=\"http:\/\/proceedings.mlr.press\/v80\/fazel18a\/fazel18a.pdf\"><img loading=\"lazy\" class=\"alignnone wp-image-811\" src=\"http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/image_preview.png\" alt=\"\" width=\"31\" height=\"31\" srcset=\"http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/image_preview.png 250w, http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/image_preview-150x150.png 150w\" sizes=\"(max-width: 31px) 100vw, 31px\" \/><\/a> \u00a0 <a href=\"https:\/\/scholar.google.com\/scholar?hl=en&amp;as_sdt=0%2C48&amp;q=Global+Convergence+of+Policy+Gradient+Methods+for+the+Linear+Quadratic+Regulator&amp;btnG=#d=gs_cit&amp;p=&amp;u=%2Fscholar%3Fq%3Dinfo%3A234xRmGEwjYJ%3Ascholar.google.com%2F%26output%3Dcite%26scirp%3D0%26hl%3Den\"><img loading=\"lazy\" class=\"alignnone wp-image-809\" src=\"http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/BibTeX_logo.svg_-300x97.png\" alt=\"\" width=\"65\" height=\"21\" 
srcset=\"http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/BibTeX_logo.svg_-300x97.png 300w, http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/BibTeX_logo.svg_-768x248.png 768w, http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/BibTeX_logo.svg_-1024x330.png 1024w, http:\/\/depts.washington.edu\/uwrainlab\/wordpress\/wp-content\/uploads\/2018\/07\/BibTeX_logo.svg_.png 1200w\" sizes=\"(max-width: 65px) 100vw, 65px\" \/><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>M. Fazel, R. Ge, S. Kakade, M. Mesbahi International Conference on Machine Learning Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model, 2) they are an \u201cend-to-end\u201d approach, directly optimizing [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"http:\/\/depts.washington.edu\/uwrainlab\/wp-json\/wp\/v2\/pages\/901"}],"collection":[{"href":"http:\/\/depts.washington.edu\/uwrainlab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"http:\/\/depts.washington.edu\/uwrainlab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"http:\/\/depts.washington.edu\/uwrainlab\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/depts.washington.edu\/uwrainlab\/wp-json\/wp\/v2\/comments?post=901"}],"version-history":[{"count":5,"href":"http:\/\/depts.washington.edu\/uwrainlab\/wp-json\/wp\/v2\/pages\/901\/revisions"}],"predecessor-version":[{"id":974,"href":"http:\/\/depts.washington.edu\/uwrainlab\/wp-json\/wp\/v2\/pages\/901\/revisions\/974"}],"wp:attachment":[{"href":"http:\/\/depts.washington.edu\/uwrainlab\/wp-json\/wp\/v2\/media?parent=901"}],"curies":[{
"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}