
Commit 82707e8

Fix doc build errors (#1072)
* Fix links
* Fix README.md links
* Make tutorial runnable
* Drop blank page
1 parent 184d441 commit 82707e8

File tree: 5 files changed (+7 −16 lines)


.github/workflows/ci.yml (+1 −1)

```diff
@@ -234,7 +234,7 @@ jobs:
           Pkg.develop(path="../src/ReinforcementLearningEnvironments")
           Pkg.develop(path="../") # ReinforcementLearning meta-package
           Pkg.develop(path="../src/ReinforcementLearningFarm")
-          include("make.jl")' skiplinks # Temporarily skip broken link checks
+          include("make.jl")' # Temporarily skip broken link checks
          mv build homepage/__site/docs
      - name: Deploy to the main repo
        uses: peaceiris/actions-gh-pages@v3
```

README.md (+4 −14)

```diff
@@ -54,13 +54,13 @@ The above simple example demonstrates four core components in a general
 reinforcement learning experiment:

 - **Policy**. The
-  [`RandomPolicy`](https://juliareinforcementlearning.github.io/docs/rlcore/#ReinforcementLearningCore.RandomPolicy)
+  [`RandomPolicy`](https://juliareinforcementlearning.org/docs/rlcore/#ReinforcementLearningCore.RandomPolicy)
   is the simplest instance of
-  [`AbstractPolicy`](https://juliareinforcementlearning.github.io/docs/rlbase/#ReinforcementLearningBase.AbstractPolicy).
+  [`AbstractPolicy`](https://juliareinforcementlearning.org/docs/rlbase/#ReinforcementLearningBase.AbstractPolicy).
   It generates a random action at each step.

 - **Environment**. The
-  [`CartPoleEnv`](https://juliareinforcementlearning.org/docs/rlenvs/#ReinforcementLearningEnvironments.CartPoleEnv-Tuple{})
+  [`CartPoleEnv`](https://juliareinforcementlearning.org/docs/rlenvs/#ReinforcementLearningEnvironments.CartPoleEnv-Tuple%7B%7D)
   is a typical
   [`AbstractEnv`](https://juliareinforcementlearning.org/docs/rlbase/#ReinforcementLearningBase.AbstractEnv)
   to test reinforcement learning algorithms.
@@ -82,17 +82,7 @@ write [blog](https://juliareinforcementlearning.org/blog/) occasionally to
 explain the implementation details of some algorithms. Among them, the most
 recommended one is [*An Introduction to
 ReinforcementLearning.jl*](https://juliareinforcementlearning.org/blog/an_introduction_to_reinforcement_learning_jl_design_implementations_thoughts/),
-which explains the design idea of this package. Besides, a collection of
-[experiments](https://juliareinforcementlearning.org/docs/experiments/) are also provided to help you understand how to train
-or evaluate policies, tune parameters, log intermediate data, load or save
-parameters, plot results and record videos. For example:
-
-<!-- ```@raw html -->
-<img
-src="https://github.com/JuliaReinforcementLearning/ReinforcementLearning.jl/raw/main/docs/src/assets/JuliaRL_BasicDQN_CartPole.gif?sanitize=true"
-width="600px">
-
-<!--
+which explains the design idea of this package.

 ## 🙋 Why ReinforcementLearning.jl?
```
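For context, the README passage above pairs the two components in a one-line experiment. A minimal sketch of that usage is below; it assumes ReinforcementLearning.jl is installed, and the exact stop-condition and hook names (`StopAfterNEpisodes`, `TotalRewardPerEpisode`) vary across package releases, so adjust to the installed version:

```julia
using ReinforcementLearning

# A RandomPolicy (simplest AbstractPolicy) driving CartPoleEnv
# (a standard AbstractEnv), as described in the README excerpt.
run(
    RandomPolicy(),            # samples a random action at each step
    CartPoleEnv(),             # classic control environment for testing
    StopAfterNEpisodes(10),    # stop condition (name is version-dependent)
    TotalRewardPerEpisode(),   # hook that records each episode's return
)
```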

docs/make.jl (−1)

```diff
@@ -37,7 +37,6 @@ makedocs(
         "How to write a customized environment?" => "How_to_write_a_customized_environment.md",
         "How to implement a new algorithm?" => "How_to_implement_a_new_algorithm.md",
         "How to use hooks?" => "How_to_use_hooks.md",
-        "Which algorithm should I use?" => "Which_algorithm_should_I_use.md",
         "Episodic vs. Non-episodic environments" => "non_episodic.md",
     ],
     "FAQ" => "FAQ.md",
```
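The entries in this hunk live in the `pages` keyword of Documenter.jl's `makedocs`; removing a pair drops that page from the built site's navigation while the source file can stay in the repository. A reduced sketch of the structure (the `sitename` and the surrounding entries are abbreviated, not the project's full config):

```julia
using Documenter

makedocs(
    sitename = "ReinforcementLearning.jl",  # assumed value for illustration
    pages = [
        "How To" => [
            "How to use hooks?" => "How_to_use_hooks.md",
            # "Which algorithm should I use?" was removed from this list,
            # so the (blank) page no longer appears in the rendered docs.
            "Episodic vs. Non-episodic environments" => "non_episodic.md",
        ],
        "FAQ" => "FAQ.md",
    ],
)
```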

docs/src/Which_algorithm_should_I_use.md (whitespace-only changes)

docs/src/tutorial.md (+2)

````diff
@@ -58,6 +58,8 @@ estimate the estimated value of each state-action pair and an explorer to select
 which action to take based on the result of the state-action values.

 ```@repl randomwalk1d
+NS = length(S)
+NA = length(A)
 policy = QBasedPolicy(
     learner = TDLearner(
         TabularQApproximator(
````
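The two added lines are what make the tutorial snippet runnable: `TabularQApproximator` needs the number of states and actions, and `NS`/`NA` were previously undefined in the `@repl` session. A hedged sketch of how the completed snippet fits together, assuming the tutorial's 1-D random-walk definitions of `S` and `A`, with constructor keywords that may differ between package versions:

```julia
using ReinforcementLearning

S = 1:7       # states of the 1-D random walk (tutorial assumption)
A = (-1, 1)   # actions: step left or right (tutorial assumption)

NS = length(S)   # added by this commit
NA = length(A)   # added by this commit

policy = QBasedPolicy(
    learner = TDLearner(
        TabularQApproximator(n_state = NS, n_action = NA),
        :SARS,   # one-step TD / Q-learning style update
    ),
    explorer = EpsilonGreedyExplorer(0.1),  # pick greedy action w.p. 0.9
)
```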
