
A Brief Analysis of How AlphaGo Works (Fusing Deep Learning and Reinforcement Learning)

<h1 id="preface">Preface</h1>

For the past couple of days the internet has been awash with the Go match between AlphaGo and Lee Sedol; AlphaGo currently leads 2-0. I have been following DeepMind, the company behind this system, since the year before last, tracking their latest papers. This event has made the term "deep learning" hot all over again. What I want to point out, though, is that while AlphaGo does benefit from advances in deep neural networks, the reason it surpasses earlier Go AIs is that it uses deep learning and reinforcement learning together, fused with the Monte Carlo tree search method that traditional Go AIs mainly rely on. Keep in mind that attempts to crack Go with deep learning alone go back several years, and their results often fell short of the best Go AIs of the time (e.g. Zen, Pachi).

The most world-shaking thing this company did last year was to beat human experts in the Arcade Learning Environment, a result also published in Nature. That environment was built to evaluate reinforcement learning algorithms; it contains over 500 video games, much like the plane, tank, and brick-breaking games we played on the Subor console as kids. In human terms: they use deep learning as the visual system, receiving the game frames and converting them into the information the brain needs, and reinforcement learning as the brain, taking in that information, making decisions, and outputting the current best policy.

Since my own research direction is reinforcement learning, and I have also built simple applications with deep learning, I personally feel that deep learning and reinforcement learning are the two most attractive directions in machine learning; combined, they may well lead to the ultimate holy grail of artificial intelligence. The following is based on the paper:

<h1 id="mastering-the-game-of-go-with-deep-neural-networks-and-tree-search"> Mastering the game of Go with deep neural networks and tree search</h1> <h2 id="1简介">1.简介</h2>

In board games, there is an optimal value function that, under perfect play by both sides, computes the game's outcome from the position of every stone on the board (that is, from the current board state). Such games can in principle be solved by recursively computing the optimal value function in a search tree, which contains roughly $b^d$ possible sequences of moves, where $b$ is the game's breadth (the number of possible moves at each position) and $d$ is its depth (the length of the game). That means chess requires evaluating about $35^{80}$ possibilities and Go about $250^{150}$, making exhaustive search essentially impossible.
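To get a feel for these numbers, here is a quick back-of-the-envelope check (a minimal Python sketch; the exponents are just the approximate $b$ and $d$ figures quoted above):

```python
# Rough sizes of the game trees, using the approximate b^d figures above.
chess = 35 ** 80
go = 250 ** 150

# Python integers have arbitrary precision, so we can count digits directly.
print(f"chess ~ 35^80:   {len(str(chess))} digits")   # ~124 digits
print(f"go    ~ 250^150: {len(str(go))} digits")      # ~360 digits
```

For comparison, the number of atoms in the observable universe is usually put at around $10^{80}$.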
The strongest current Go programs are all based on Monte Carlo tree search (MCTS), strengthened with policies that predict the moves of human experts. But these policies are shallow, and their value functions are only linear combinations of the input features.
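For readers unfamiliar with MCTS, below is a minimal sketch of the vanilla algorithm (selection by UCB1, uniform-random rollouts). It is not AlphaGo's implementation, and the game-state interface (`legal_moves`, `play`, `is_over`, `winner`, `player_to_move`) is a hypothetical placeholder you would have to supply:

```python
import math
import random

class Node:
    """One node of the search tree: a game state plus visit statistics."""
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried = list(state.legal_moves())
        self.visits = 0
        self.wins = 0.0

def ucb1(node, c=1.4):
    """Upper Confidence Bound: trades off win rate against exploration."""
    return (node.wins / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(root_state, n_iter=10_000):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # 1. Selection: walk down fully expanded nodes by UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=ucb1)
        # 2. Expansion: try one previously unexplored move.
        if node.untried:
            move = node.untried.pop()
            node.children.append(Node(node.state.play(move), node, move))
            node = node.children[-1]
        # 3. Simulation: play random moves to the end of the game
        #    (this is where AlphaGo instead uses its fast rollout policy).
        state = node.state
        while not state.is_over():
            state = state.play(random.choice(state.legal_moves()))
        winner = state.winner()
        # 4. Backpropagation: credit the result up the tree. A node's win
        #    count is kept from the viewpoint of the player who moved into it.
        while node.parent is not None:
            node.visits += 1
            if winner == node.parent.state.player_to_move():
                node.wins += 1.0
            node = node.parent
        root.visits += 1
    # Play the most-visited move.
    return max(root.children, key=lambda n: n.visits).move
```

Roughly speaking, AlphaGo's networks slot into exactly this loop: the policy networks guide selection and provide a fast rollout policy for simulation, while the value network supplements rollout results when evaluating positions.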
We use CNNs (convolutional neural networks) to obtain, from a $19 \times 19$ image of the board, a representation of the positions of all the stones; a value network evaluates the current position, and a policy network samples moves to play.
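To illustrate the "board as image" idea, here is a minimal sketch that encodes a position into image-like feature planes. The three planes here (own stones / opponent stones / empty points) are a toy simplification; the paper feeds its policy network a much richer 48-plane encoding:

```python
import numpy as np

def encode_board(board, player):
    """Encode a 19x19 Go position as CNN input planes.

    board: 19x19 array with 0 = empty, 1 = black, 2 = white.
    Returns a (3, 19, 19) float array: own stones, opponent stones, empty.
    (A simplified stand-in for the paper's 48 feature planes.)
    """
    board = np.asarray(board)
    opponent = 2 if player == 1 else 1
    planes = np.stack([
        (board == player).astype(np.float32),    # plane 0: our stones
        (board == opponent).astype(np.float32),  # plane 1: their stones
        (board == 0).astype(np.float32),         # plane 2: empty points
    ])
    return planes  # shape (3, 19, 19), ready for a conv net
```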
In all, there are three kinds of deep networks:

<ul><li>Supervised learning of policy networks (SL, $p_\sigma$)
Trained by supervised learning on the moves of human experts. This provides a fast, efficient learning-update process.
A fast rollout policy network $p_\pi$ is trained alongside it, used for rapid action sampling early on; it acts as a complement to the SL network.</li> <li>Reinforcement learning of policy networks (RL, $p_\rho$)
The RL policy network improves on the SL policy network by optimizing the final outcome of games of self-play.</li> <li>Reinforcement learning of value networks ($v_\theta$)
It is used to predict the winner of games the RL policy network plays against itself.</li> </ul><h2 id="2-the-three-main-networks-and-the-training-pipeline">2. The three main networks and the training pipeline</h2>

<h3 id="1-监督学习的策略网络">1. 监督学习的策略网络</h3>

In the first stage of the training pipeline, the groundwork is to predict how a human expert would play the next move: supervised learning on 30 million positions from the KGS Go Server is used to train a 13-layer policy network. This is essentially a classification task: given an image of a position, the trained network outputs all possible next moves (that is, actions $a$) together with their probabilities (this is the job of the softmax layer, which outputs a probability distribution over actions $a$ in state $s$).
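As a rough illustration (in PyTorch, which is my choice, not the paper's, and with widths shrunk from the paper's 192 filters per layer), a 13-layer convolutional policy network of this shape ends in a softmax over all $19 \times 19 = 361$ board points:

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Sketch of an AlphaGo-style SL policy network: conv layers over
    board feature planes, softmax over the 19*19 = 361 board points."""
    def __init__(self, in_planes=3, width=64, n_hidden=11):
        super().__init__()
        # 1 input conv + 11 hidden convs + 1 final 1x1 conv = 13 layers.
        layers = [nn.Conv2d(in_planes, width, kernel_size=5, padding=2), nn.ReLU()]
        for _ in range(n_hidden):
            layers += [nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 1, kernel_size=1)]  # one logit per point
        self.body = nn.Sequential(*layers)

    def forward(self, planes):                  # planes: (batch, in_planes, 19, 19)
        logits = self.body(planes).flatten(1)   # (batch, 361)
        return torch.softmax(logits, dim=1)     # p_sigma(a|s) over all points

net = PolicyNet()
probs = net(torch.zeros(1, 3, 19, 19))  # e.g. an empty board
print(probs.shape, probs.sum().item())  # torch.Size([1, 361]), ~1.0
```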
The SL policy network is trained on randomly sampled state-action pairs $(s, a)$, using stochastic gradient ascent to maximize the likelihood of a human selecting action $a$ in state $s$. The parameters $\sigma$ are updated by:

$$\Delta\sigma \propto \frac{\partial \log p_\sigma(a \mid s)}{\partial \sigma}$$
Its accuracy reached 57.0%, and even with only the raw board position and move history as input it reached 55.7%, against a previous state of the art of 44.4%. Generally, larger networks achieve better accuracy but evaluate more slowly during search, so a structurally simpler, faster, but less accurate rollout policy $p_\pi(a \mid s)$ is trained as well; its accuracy is 24.2%, but it takes only $2\,\mu s$ to select an action.

<h3 id="2强化学习的策略网络">2.强化学习的策略网络</h3>

To be continued.
