
Conversation

@shutovilyaep

Registering Random TTNN OPs (PR on top of #1210)

Problem description

Existing TTNN OPs should be registered with the PyTorch dispatcher.
The scope of this PR is Random OPs only; other groups of OPs will be added in follow-up PRs.

What's changed

Random wrappers are introduced and the Random TTNN OPs are registered with the dispatcher.

TODO:

  • Add more tests

m.impl("bernoulli", TORCH_FN(tt_eager::ext::unary_random_seeded<ttnn::bernoulli>::invoke));
// schema: bernoulli.out(Tensor self, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!)
m.impl("bernoulli.out", TORCH_FN(tt_eager::ext::unary_random_seeded<ttnn::bernoulli>::invoke_into));
// bernoulli_.Tensor
Collaborator

Can we separate out the ones that aren't implemented from the ones that are? (I think you did this in one of the other PRs.) It makes it clearer where the gaps are.
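
For reference, one way to make that split visible is to group the registrations by status inside the TORCH_LIBRARY_IMPL block. This is only a sketch: the dispatch key PrivateUse1 and the commented-out placeholder op names are assumptions, not taken from this PR.

// Sketch only: group implemented vs. not-yet-implemented registrations so the gaps stand out.
// The dispatch key (PrivateUse1) and the placeholder op names are illustrative assumptions.
TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
    // --- Random OPs backed by TTNN kernels ---
    m.impl("bernoulli", TORCH_FN(tt_eager::ext::unary_random_seeded<ttnn::bernoulli>::invoke));
    m.impl("bernoulli.out", TORCH_FN(tt_eager::ext::unary_random_seeded<ttnn::bernoulli>::invoke_into));

    // --- Random OPs with no TTNN-backed kernel yet (known gaps) ---
    // m.impl("multinomial", ...);
    // m.impl("random_", ...);
}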

#include <ttnn/operations/rand/rand.hpp>
#include <ttnn/operations/bernoulli/bernoulli.hpp>
#include <ttnn/operations/uniform/uniform.hpp>
#include <ttnn/operations/eltwise/unary/unary.hpp>
Collaborator

I noticed this in some of the other PRs too - is there a reason we include unary.hpp in all of the eager wrapper files?

[[nodiscard]] static at::Tensor& invoke_into(
    const at::Tensor& input, c10::optional<at::Generator> generator, at::Tensor& out) {
  ttnn::Tensor in_tile = tt_eager::ext::tileify(input);
  static thread_local std::mt19937 rng(std::random_device{}());
Collaborator

Is this the random number generator used by PyTorch / tt-metal?
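
If the goal is for torch.manual_seed() to influence these OPs, the seed could be drawn from the incoming at::Generator rather than from a process-local std::mt19937. A minimal sketch, assuming the default CPU generator as a fallback; the helper name and wiring are not from this PR.

#include <ATen/CPUGeneratorImpl.h>
#include <ATen/Utils.h>
#include <cstdint>
#include <mutex>

// Sketch only: derive a per-call seed from the PyTorch generator so that
// torch.manual_seed() is honoured. Helper name and usage are assumptions.
static uint32_t seed_from_generator(c10::optional<at::Generator> generator) {
    auto* gen = at::get_generator_or_default<at::CPUGeneratorImpl>(
        generator, at::detail::getDefaultCPUGenerator());
    // PyTorch convention: hold the generator's mutex while consuming its state.
    std::lock_guard<std::mutex> lock(gen->mutex_);
    return static_cast<uint32_t>(gen->random());
}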

    *device,
    ttnn::DataType::FLOAT32,
    layout,
    ttnn::DRAM_MEMORY_CONFIG,
Collaborator

DRAM is good for a first cut, but there are big perf implications of DRAM vs. SRAM (L1). We should explore how difficult it is to support SRAM in the future (for all OPs, not just randoms).
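
As a note for that future exploration, TTNN exposes an L1 config constant alongside the DRAM one, so the wrappers could take the memory config as a parameter instead of hard-coding it. A rough sketch, assuming the existing TTNN includes in this file; nothing here is taken from the PR, and L1 only works when the tensor fits.

// Sketch only: make the memory config a parameter so an SRAM (L1) path can be tried later.
// Defaulting to DRAM preserves today's behaviour.
static ttnn::MemoryConfig random_op_memory_config(bool prefer_l1 = false) {
    // L1 is much faster but limited in capacity; callers must know the tensor fits.
    return prefer_l1 ? ttnn::L1_MEMORY_CONFIG : ttnn::DRAM_MEMORY_CONFIG;
}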

}
static inline ttnn::Tensor cast_after_sampling(const ttnn::Tensor& src, at::ScalarType st, bool is_int) {
  if (is_int) {
    auto floored = ttnn::floor(src);
Collaborator

nit: names like floored don't make it clear where the original value came from or what it represents
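
For illustration, a name that carries that information (purely a suggestion, not code from the PR):

// Records both where the value came from and what was done to it:
auto float_samples_floored = ttnn::floor(src);  // src holds the raw float samples from the random kernel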
