**docs/src/manual/learning.md** (+47 −8)
# [Learning](@id man-learning)

In this section we provide a few learning scenarios for circuits. In general, learning tasks for PCs can be separated into two categories: parameter learning and structure learning.

Given a fixed structure for the PC and a dataset, the goal of parameter learning is to estimate the parameters so that the likelihood is maximized.

## Learn a Circuit

You can use [`learn_circuit`](@ref) to learn a probabilistic circuit from the data (both parameter and structure learning).
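
For instance, a minimal sketch might look as follows, assuming `train_x` is a `DataFrame` of Boolean features; the `maxiter` keyword and the `num_nodes` / `num_parameters` helpers are assumptions about the package API:

```julia
using ProbabilisticCircuits

# Jointly learn a structure and its parameters from the training data
# (`maxiter` caps the number of structure-learning iterations; assumed keyword).
pc = learn_circuit(train_x; maxiter = 100);

# Summarize the learned circuit (assumed helper functions).
"Learned a PC with $(num_nodes(pc)) nodes and $(num_parameters(pc)) parameters"
```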
"Train log-likelihood is $(log_likelihood_avg(pc, train_x))"
26
64
```
27
65
28
66

As we can see, the likelihood improved; however, we are still using a fully factorized distribution, so there is room for improvement. For example, we can choose the initial structure based on Chow-Liu trees.
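
A hypothetical sketch of that idea, where `learn_chow_liu_tree_circuit` and `estimate_parameters` are assumed names for the structure-initialization and parameter-estimation steps:

```julia
# Hypothetical: build an initial circuit whose structure follows a Chow-Liu tree,
# then fit its parameters by maximum likelihood with Laplace smoothing.
clt_pc, vtree = learn_chow_liu_tree_circuit(train_x)      # assumed helper
estimate_parameters(clt_pc, train_x; pseudocount = 1.0)   # assumed helper
"Train log-likelihood is $(log_likelihood_avg(clt_pc, train_x))"
```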

**docs/src/manual/queries.md** (+58 −11)

First, we randomly make some features go `missing`:

```@example queries
using DataFrames
function make_missing(d::DataFrame; keep_prob=0.8)
    m = missings(Bool, num_examples(d), num_features(d))
    flag = rand(num_examples(d), num_features(d)) .<= keep_prob
    m[flag] .= Matrix(d)[flag]
    DataFrame(m)
end;
data_miss = make_missing(data[1:1000,:]);
nothing #hide
```

```julia
probs_mar ≈ probs_evi
```

## Conditionals (CON)

In this case, given observed features ``x^o``, we would like to compute ``p(Q \mid x^o)``, where ``Q`` is a subset of features disjoint with ``x^o``.
We can use Bayes rule to compute conditionals as two separate MAR queries as follows:

```math
p(q \mid x^o) = \cfrac{p(q, x^o)}{p(x^o)}
```

Currently, this has to be done manually by the user. We plan to add a simple API...
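
For example, a minimal sketch of the manual computation, assuming `MAR` returns per-example log-probabilities; `data_obs` and `data_joint` are hypothetical `DataFrame`s holding the partial evidence and the evidence with the query values filled in:

```julia
# data_obs:   observed features x^o, all other entries `missing`
# data_joint: copy of data_obs with the query features Q additionally set to q
log_p_joint = MAR(pc, data_joint)   # log p(q, x^o)
log_p_obs   = MAR(pc, data_obs)     # log p(x^o)
p_cond = exp.(log_p_joint .- log_p_obs)   # p(q | x^o)
```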

## Maximum a Posteriori (MAP)

In this case, given the observed features ``x^o``, the goal is to fill in the missing features in a way that ``p(x^m, x^o)`` is maximized.

We can use the [`MAP`](@ref) method to compute MAP, which outputs the states that maximize the probability and the log-likelihoods of each state.

```@example queries
data_miss = make_missing(data, keep_prob=0.5);
states, probs = MAP(pc, data_miss);
probs[1:3]
```

## Sampling

We can also sample from the distribution ``p(x)`` defined by a probabilistic circuit. You can use [`sample`](@ref) to achieve this task.

```@example queries
samples, lls = sample(pc, 100);
lls[1:3]
```

Additionally, we can draw conditional samples ``x \sim p(x \mid x^o)``, where ``x^o`` are the observed features (``x^o \subseteq x``) and can be any arbitrary subset of features, as sketched below.
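
A possible sketch, assuming `sample` also accepts a `DataFrame` with `missing` entries and fills in the missing features conditioned on the observed ones:

```julia
# Keep roughly half of the features observed, then sample the rest from p(x^m | x^o)
# (the three-argument call is an assumed conditional-sampling signature).
data_obs = make_missing(data[1:10, :], keep_prob = 0.5);
cond_samples, cond_lls = sample(pc, 100, data_obs);
```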

## Expected Prediction (EXP)

Expected Prediction (EXP) is the task of taking the expectation of a discriminative model w.r.t. a generative model, conditioned on evidence (a subset of observed features).
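
In symbols, with ``f`` denoting the discriminative model and ``p`` the distribution defined by the PC, the quantity of interest can be written as:

```math
\mathbb{E}_{x^m \sim p(x^m \mid x^o)}\left[ f(x^m, x^o) \right]
```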