An example giving model #105
Replies: 4 comments
Hi Ariel, thanks for writing!

For (1), you can use the …. Alternatively, you can use the …. What's the difference? Well, …. (If you tried this with …, ….)

For (2), based on my understanding of what you want to do, I would recommend writing your teacher simulation as a Python loop that repeatedly calls a student model, iteratively updating its belief. To make this work, you would set up the student's model to take the prior as one of its inputs. So, abstractly, your loop would look like:

```python
prior = uniform_prior
while True:
    n = ...  # (somehow choose n)
    posterior = student(prior, n)
    if is_confident_enough(posterior):
        break
    else:
        prior = posterior
```

Does that make sense?
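For anyone who wants to run this loop end-to-end, here is a minimal self-contained sketch in plain Python (not memo) that fills in `student` and `is_confident_enough` with a size-principle number-game learner. The interval hypothesis space, the 0.5 confidence threshold, and the teacher's concept are all illustrative assumptions, not part of the original suggestion:

```python
import numpy as np

# Assumed hypothesis space: intervals [lo, hi] of length <= 20 on {0, ..., 99}.
HYPOTHESES = [(lo, hi) for lo in range(100) for hi in range(lo, min(lo + 20, 100))]

def student(prior, n):
    """One Bayesian update on a single observed example n (size principle)."""
    likelihood = np.array([1.0 / (hi - lo + 1) if lo <= n <= hi else 0.0
                           for lo, hi in HYPOTHESES])
    posterior = prior * likelihood
    return posterior / posterior.sum()

def is_confident_enough(posterior, threshold=0.5):
    """Illustrative stopping rule: one hypothesis holds most of the mass."""
    return posterior.max() > threshold

prior = np.full(len(HYPOTHESES), 1.0 / len(HYPOTHESES))  # uniform prior
true_concept = (10, 14)  # hypothetical teacher concept: integers 10..14
rng = np.random.default_rng(0)
while True:
    n = int(rng.integers(true_concept[0], true_concept[1] + 1))  # teacher's example
    posterior = student(prior, n)
    if is_confident_enough(posterior):
        break
    prior = posterior
```

Because every example lies inside the true concept, the posterior concentrates on a small consistent interval and the loop terminates once the stopping rule fires.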
Hi Ariel! Just checking in — did that help you solve your problem? Please feel free to ask follow-up questions if my response wasn't clear, or if more issues come up! :)
Hi Kartik! Thanks so much for your answer, and sorry for the late response; I've been busy with another project and didn't get a chance to try implementing your suggestions.

I completely understand the 'observes_that' part, and thanks! It helped me make progress simulating a learner that learns from examples. However, I am still struggling to model learning from more than one example. For context, programming it outside of memo is very simple: the likelihood is computed using the size principle, multiplied by the prior, and normalized: ….

When trying to model this inside memo, though, I struggle with the 'chooses x in y' syntax, because it requires me to have a representation of all possible example sets that the teacher could have given! That is only workable for up to 2 or 3 examples; with more than that, 'possible_examples_sets' just becomes huge and crashes my IDE. I'm sure there is a way to model this in memo without an explicit representation of all possible messages the teacher could have sent. Maybe it's my lack of experience with PPLs, but I feel that the 'chooses X in Y' syntax forces me to have a representation of all possible Y values in order to update my posterior given a specific Y. In the non-PPL-based Bayesian updating function I pasted above, there is no need to know all possible example sets before updating the posterior based on a specific example set.

Regarding the while loop: thanks, that's a great idea, and indeed that's exactly the implementation I used before I tried translating my model to memo. However, I was hoping there is some way to have this while loop, or another iterative message-choice process, within a memo function, so that I could start building a recursive teacher-learner relationship.

Thanks again for all the help, and for memo!
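The single-pass update described here (size-principle likelihood times prior, then normalize) indeed needs no enumeration of possible example sets; it only touches the one example set that was actually observed. A minimal sketch of that computation, assuming interval hypotheses on a 0..99 number line (the helper names are illustrative, not from any pasted code):

```python
import numpy as np

# Assumed hypothesis space: intervals [lo, hi] of length <= 20 on {0, ..., 99}.
HYPOTHESES = [(lo, hi) for lo in range(100) for hi in range(lo, min(lo + 20, 100))]

def posterior_from_examples(examples, prior=None):
    """Bayesian update on one specific example set, via the size principle:
    each consistent hypothesis h contributes likelihood (1/|h|)^len(examples)."""
    if prior is None:
        prior = np.full(len(HYPOTHESES), 1.0 / len(HYPOTHESES))
    like = np.ones(len(HYPOTHESES))
    for j, (lo, hi) in enumerate(HYPOTHESES):
        size = hi - lo + 1
        for x in examples:
            like[j] *= (1.0 / size) if lo <= x <= hi else 0.0
    post = prior * like
    return post / post.sum()

post = posterior_from_examples([16, 18, 20])
best = HYPOTHESES[int(np.argmax(post))]  # smallest consistent interval wins
```

With examples 16, 18, and 20, the size principle favors the tightest consistent interval, [16, 20], over any looser one.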
Hi Ariel, try this: ….
I'm doing research on (social) concept learning and teaching with examples (Shafto-style). As a first step, I tried to implement concept learning in a very simple environment: Tenenbaum's number game, where the hypothesis space consists only of intervals of length up to 20 on a 100-point number line.
It returns the full matrix of size |hypothesis space| × n, which is great! But sometimes, for simulations, I want to condition on a specific value that the teacher gave and get the posterior probability only for that value (for example, conditioning on teacher.n == i). Right now I just take learner()[:,i], but it seems suboptimal to compute the entire matrix when I only need one column. What's the memo syntax for conditioning on a specific value?
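As a sanity check on the slice-a-column workaround, the column of the full matrix should match a posterior computed directly for that one example. A toy stand-in for the model in plain numpy, assuming the interval hypothesis space above (not the actual memo code), illustrates this:

```python
import numpy as np

# Assumed hypothesis space: intervals [lo, hi] of length <= 20 on {0, ..., 99}.
HYPOTHESES = [(lo, hi) for lo in range(100) for hi in range(lo, min(lo + 20, 100))]

def full_posterior_matrix():
    """Posterior P(h | n) for every hypothesis h and every possible example n."""
    like = np.zeros((len(HYPOTHESES), 100))
    for j, (lo, hi) in enumerate(HYPOTHESES):
        like[j, lo:hi + 1] = 1.0 / (hi - lo + 1)  # size principle
    return like / like.sum(axis=0, keepdims=True)  # uniform prior cancels

def posterior_for_one_example(n):
    """The same computation, restricted to a single observed example n."""
    like = np.array([1.0 / (hi - lo + 1) if lo <= n <= hi else 0.0
                     for lo, hi in HYPOTHESES])
    return like / like.sum()

# Slicing the full matrix agrees with computing just the one column.
col = full_posterior_matrix()[:, 17]
direct = posterior_for_one_example(17)
```

The single-column version avoids building the full |hypothesis space| × 100 matrix when only one observed value matters.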
Thanks!
Ariel