
Basis for FeatureNN and LinReLU #6


Description

@patrick-john-ramos

Hello! According to Section 3 of the NAM paper,

Feature nets in NAMs are selected amongst (1) DNNs containing 3 hidden layers with 64, 64 and 32 units and ReLU activation, and (2) single hidden layer NNs with 1024 ExU units and ReLU-1 activation
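
For concreteness, here is a minimal PyTorch sketch of an ExU unit with ReLU-1 activation as the paper describes it (h(x) = f(e^w (x - b))). The parameter names and the bias initialization are my assumptions, not the repo's actual code; the N(4, 0.5) weight initialization is the one the paper recommends:

```python
import torch
import torch.nn as nn

class ExU(nn.Module):
    """Sketch of an exp-centered (ExU) unit: h(x) = f(e^w * (x - b)).

    Shapes and names are assumptions for illustration only.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        self.bias = nn.Parameter(torch.empty(in_features))
        # The paper suggests initializing weights around N(4, 0.5)
        # so that exp(weight) starts large.
        nn.init.normal_(self.weight, mean=4.0, std=0.5)
        nn.init.zeros_(self.bias)  # assumption; the paper uses a truncated normal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU-1 activation: ReLU capped at 1, paired with ExU units.
        out = (x - self.bias) @ torch.exp(self.weight)
        return torch.clamp(out, 0.0, 1.0)
```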

However, the feature nets implemented in FeatureNN use either an ExU layer or a LinReLU layer, followed by more LinReLU layers and topped off with a standard Linear layer (a rough sketch of this structure follows the questions below). May I ask:

  1. What was the basis of this feature net architecture?

  2. What was the basis for the LinReLU layer? I understand that it is similar to the ExU layer described in the paper, just without the exponential, but where did it come from?
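
To make sure I'm reading the code correctly, this is the structure I mean. It is only a sketch under my assumptions (constructor arguments like `hidden_sizes` and `use_exu` are made up, and `ExU` refers to the sketch above), not the repo's actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinReLU(nn.Module):
    """Sketch of LinReLU: the ExU computation without the exponential,
    i.e. ReLU((x - b) @ W). Initialization is an assumption."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        self.bias = nn.Parameter(torch.empty(in_features))
        nn.init.xavier_uniform_(self.weight)
        nn.init.zeros_(self.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu((x - self.bias) @ self.weight)


class FeatureNN(nn.Module):
    """Sketch of the per-feature net structure as I understand it:
    a first ExU or LinReLU layer, further LinReLU layers, and a final
    Linear output. Constructor arguments are assumptions."""

    def __init__(self, hidden_sizes=(64, 64, 32), use_exu=False):
        super().__init__()
        # ExU here refers to the sketch in the quote above.
        first_layer = ExU if use_exu else LinReLU
        layers = [first_layer(1, hidden_sizes[0])]  # one scalar input per feature
        for n_in, n_out in zip(hidden_sizes, hidden_sizes[1:]):
            layers.append(LinReLU(n_in, n_out))
        layers.append(nn.Linear(hidden_sizes[-1], 1))
        self.model = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.model(x)
```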

I do apologize if the answers are already in the paper and I just overlooked them while reading it!
