Commit c5c5edc

Update default.html

1 parent 8ebae1f
1 file changed: +11 −0 lines changed

Diff for: docs/_layouts/default.html (+11)
@@ -56,6 +56,9 @@ <h3 id="related-works">Related works:</h3>
 <li>
 <p>The research results for Neural Network Knowledge Based system for the tasks of collision avoidance is put in separate repository and is available here: <a href="https://github.com/sichkar-valentyn/Matlab_implementation_of_Neural_Networks">Matlab implementation of Neural Networks</a></p>
 </li>
+<li>
+<p>The study of Semantic Web languages OWL and RDF for Knowledge representation of Alarm-Warning System is put in separate repository and is available here: <a href="https://github.com/sichkar-valentyn/Knowledge_Base_Represented_by_Semantic_Web_Language">Knowledge Base Represented by Semantic Web Language</a></p>
+</li>
 <li>
 <p>The study of Neural Networks for Computer Vision in autonomous vehicles and robotics is put in separate repository and is available here: <a href="https://github.com/sichkar-valentyn/Neural_Networks_for_Computer_Vision">Neural Networks for Computer Vision</a></p>
 </li>
@@ -136,11 +139,15 @@ <h3 id="rl-q-learning-environment-1-experimental-results"><a name="RL Q-Learning
 
 <p><img src="/Reinforcement_Learning_in_Python/images/Environment-1.gif" alt="Environment-1" width="312" height="341" /> <img src="/Reinforcement_Learning_in_Python/images/Environment-1.png" alt="Environment-1" width="312" height="341" /></p>
 
+<p><br /></p>
+
 <h3 id="q-learning-algorithm-resulted-chart-for-the-environment-1"><a name="Q-learning algorithm resulted chart for the environment-1">Q-learning algorithm resulted chart for the environment-1</a></h3>
 <p>Represents number of episodes via number of steps and number of episodes via cost for each episode</p>
 
 <p><img src="/Reinforcement_Learning_in_Python/images/Charts-1.png" alt="RL_Q-Learning_C-1" /></p>
 
+<p><br /></p>
+
 <h3 id="final-q-table-with-values-from-the-final-shortest-route-for-environment-1"><a name="Final Q-table with values from the final shortest route for environment-1">Q-table with values from the final shortest route for environment-1</a></h3>
 <p><img src="/Reinforcement_Learning_in_Python/images/Q-Table-E-1.png" alt="RL_Q-Learning_T-1" />
 <br />Looking at the values of the table we can see the decision for the next action made by agent (mobile robot). The sequence of final actions to reach the goal after the Q-table is filled with knowledge is the following: <em>down-right-down-down-down-right-down-right-down-right-down-down-right-right-up-up.</em>
@@ -153,11 +160,15 @@ <h3 id="rl-q-learning-environment-2-experimental-results"><a name="RL Q-Learning
 
 <p><img src="/Reinforcement_Learning_in_Python/images/Environment-2.png" alt="RL_Q-Learning_E-2" /></p>
 
+<p><br /></p>
+
 <h3 id="q-learning-algorithm-resulted-chart-for-the-environment-2"><a name="Q-learning algorithm resulted chart for the environment-2">Q-learning algorithm resulted chart for the environment-2</a></h3>
 <p>Represents number of episodes via number of steps and number of episodes via cost for each episode</p>
 
 <p><img src="/Reinforcement_Learning_in_Python/images/Charts-2.png" alt="RL_Q-Learning_C-2" /></p>
 
+<p><br /></p>
+
 <h3 id="final-q-table-with-values-from-the-final-shortest-route-for-environment-1-1"><a name="Final Q-table with values from the final shortest route for environment-1">Q-table with values from the final shortest route for environment-1</a></h3>
 <p><img src="/Reinforcement_Learning_in_Python/images/Q-Table-E-2.png" alt="RL_Q-Learning_T-2" /></p>
 
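The added <p><br /></p> elements are only layout spacers; the substantive content in the unchanged lines is the Q-table walkthrough, where the agent reads its next move as the highest-valued action for its current state and the chain of such moves forms the quoted down-right-…-up-up route. As a minimal illustrative sketch of that readout (not code from this commit or the linked repository; the grid moves, action names, and helper function are assumptions):

```python
import numpy as np

# Hypothetical sketch: extract the greedy route from a filled Q-table.
# q_table maps a state (row, col) to an array of 4 action values,
# ordered to match ACTIONS. Names and layout are illustrative only.

ACTIONS = ["up", "down", "left", "right"]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def greedy_route(q_table, start, goal, max_steps=100):
    """Follow the argmax-Q action from start until goal (or step limit)."""
    state, route = start, []
    for _ in range(max_steps):
        if state == goal:
            break
        # The agent's "decision for the next action" is the best-valued one.
        action = ACTIONS[int(np.argmax(q_table[state]))]
        route.append(action)
        dr, dc = MOVES[action]
        state = (state[0] + dr, state[1] + dc)
    return route
```

Run against a table filled by training, such a helper would reproduce an action list like the <em>down-right-…-up-up</em> sequence the diff quotes for environment-1.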
Comments (0)