Conversation

@sgreenbury sgreenbury commented Oct 2, 2025

Closes #868.

This PR adds experimental functionality demonstrating an emulator that can be trained on alternative output domains (here, $y \in [0, 1]$):

  • A zero-one inflated beta distribution subclassing torch.distributions.Distribution
  • An experimental emulator subclassing MLP that returns a zero-one inflated beta distribution for its predictions
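As a rough illustration of the first bullet, a zero-one inflated beta can be written as a mixture of point masses at 0 and 1 with a Beta distribution on the interior. The sketch below is hypothetical (parameter names and details in the merged implementation may differ); it subclasses torch.distributions.Distribution as the PR describes:

```python
import torch
from torch.distributions import Beta, Distribution, constraints


class ZeroOneInflatedBeta(Distribution):
    """Mixture of point masses at 0 and 1 with a Beta on (0, 1).

    Illustrative sketch only; not the merged implementation.
    """

    arg_constraints = {}
    support = constraints.unit_interval
    has_rsample = False

    def __init__(self, p0, p1, concentration1, concentration0, validate_args=None):
        # p0 = P(y == 0), p1 = P(y == 1); the remaining mass
        # 1 - p0 - p1 follows Beta(concentration1, concentration0).
        self.p0 = torch.as_tensor(p0)
        self.p1 = torch.as_tensor(p1)
        self.beta = Beta(concentration1, concentration0)
        super().__init__(self.beta.batch_shape, validate_args=validate_args)

    def log_prob(self, value):
        # Clamp the continuous branch so Beta.log_prob stays finite at the
        # endpoints; the endpoint branches of torch.where ignore that value.
        cont = torch.log1p(-(self.p0 + self.p1)) + self.beta.log_prob(
            value.clamp(1e-6, 1.0 - 1e-6)
        )
        return torch.where(
            value == 0,
            torch.log(self.p0),
            torch.where(value == 1, torch.log(self.p1), cont),
        )

    def sample(self, sample_shape=torch.Size()):
        # Draw the mixture component, then emit 0, 1, or a Beta draw.
        u = torch.rand(self._extended_shape(sample_shape))
        cont = self.beta.sample(sample_shape)
        return torch.where(
            u < self.p0,
            torch.zeros_like(cont),
            torch.where(u < self.p0 + self.p1, torch.ones_like(cont), cont),
        )
```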

There is also a small, non-breaking change to the (non-experimental) MLP class to allow a customisable number of parameters in the last layer. Here that is used to specify the 5 parameters per prediction needed for the zero-one inflated beta.
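One plausible mapping of a 5-dimensional final layer to valid distribution parameters (this is an assumption for illustration, not the PR's actual head): three logits constrained to a simplex for the probabilities of 0, 1, and the continuous component, plus two softplus-transformed values for the Beta concentrations.

```python
import torch
import torch.nn.functional as F


def params_from_logits(raw):
    """Map a (..., 5) final-layer output to zero-one inflated beta parameters.

    Hypothetical mapping: softmax over the first three entries gives
    (p0, p1, p_beta); softplus makes the last two positive concentrations.
    """
    probs = F.softmax(raw[..., :3], dim=-1)
    p0, p1 = probs[..., 0], probs[..., 1]
    concentration1 = F.softplus(raw[..., 3]) + 1e-6
    concentration0 = F.softplus(raw[..., 4]) + 1e-6
    return p0, p1, concentration1, concentration0
```

The softmax guarantees p0 + p1 < 1, so the mixture weights are valid by construction whatever the network outputs.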

@sgreenbury sgreenbury marked this pull request as ready for review October 20, 2025 10:37
@sgreenbury sgreenbury requested a review from radka-j October 20, 2025 10:38
@sgreenbury sgreenbury merged commit 0f25b70 into main Oct 20, 2025
5 checks passed
@sgreenbury sgreenbury deleted the 868-beta-model branch October 20, 2025 14:00

Development

Successfully merging this pull request may close these issues.

Implement an emulator to support alternative domains such as [0, 1]

3 participants