RnDTopics

  1. Dirichlet uncertainty and its application to Adversarial and OOD detection

  2. Uncertainty estimation

  3. Label noise and Dirichlet uncertainty

  4. Simulated Manipulation with semantic segmentation - Sathwik

  5. Multi-label uncertainty quantification

    • How do uncertainty estimation (UE) methods perform in the multi-label classification scenario?
  6. Generalized normal distribution loss function for regression - Nandhini

    • https://en.wikipedia.org/wiki/Generalized_normal_distribution
    • Two parameters: alpha (scale) and beta (shape)
    • beta == 2: normal distribution
    • beta == 1: Laplace distribution
    • beta < 2: tails heavier than the normal
    • beta > 2: tails lighter than the normal
    • Both the variance and the entropy are defined in closed form
    • Options: learn both alpha and beta for every output,
    • or learn a single beta per input,
    • or learn a single beta for the whole dataset (see the loss sketch after this list)
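
A minimal sketch of what such a loss could look like in PyTorch, assuming the standard (mu, alpha, beta) parameterization from the Wikipedia page; the `gennorm_nll` helper and the softplus parameterization of alpha and beta are illustrative choices, not a fixed design:

```python
import torch
import torch.nn.functional as F

def gennorm_nll(y, mu, alpha, beta):
    # NLL of the generalized normal distribution:
    #   pdf(y) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|y - mu| / alpha)**beta)
    # beta == 2 recovers the Gaussian, beta == 1 the Laplace.
    z = (y - mu).abs() / alpha
    return (torch.log(2 * alpha)
            + torch.lgamma(1.0 / beta)
            - torch.log(beta)
            + z.pow(beta)).mean()

# Hypothetical head predicting (mu, alpha, beta) per output;
# softplus keeps the scale and shape strictly positive.
raw = torch.randn(8, 3, requires_grad=True)  # stand-in for net(x)
mu = raw[:, 0]
alpha = F.softplus(raw[:, 1]) + 1e-6
beta = F.softplus(raw[:, 2]) + 1e-6
loss = gennorm_nll(torch.randn(8), mu, alpha, beta)
loss.backward()
```
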
  7. Surface normal uncertainty - Kaushik

  8. Continuous Bernoulli distribution - https://pytorch.org/docs/stable/distributions.html#continuousbernoulli - see also the Wikipedia page - A loss function using the continuous Bernoulli distribution - Not yet clear; needs rethinking (see the sketch below)
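
One possible direction, sketched purely as an assumption since the idea above is still open: treat targets normalized to [0, 1] (e.g. pixel intensities) as samples of a Continuous Bernoulli and minimize its NLL with PyTorch's built-in distribution; the `cb_nll` helper is a hypothetical name:

```python
import torch
from torch.distributions import ContinuousBernoulli

def cb_nll(logits, targets):
    # Negative log-likelihood under a Continuous Bernoulli likelihood;
    # targets must lie in [0, 1], e.g. normalized pixel intensities.
    return -ContinuousBernoulli(logits=logits).log_prob(targets).mean()

loss = cb_nll(torch.randn(4, 10), torch.rand(4, 10))
```
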

  9. Conditional regression/classification (multi-input regression/classification) - Gokul

    • In addition to the input image, we can add task context information as embeddings
    • For example, an object pick-point regression task that outputs only a single object center point
    • If we add the object's name as an additional input, we can select which object's center we are looking for
    • To compare:
      • How to embed the object name? Fixed embedding (e.g. ECOC embedding) vs. learned embedding? If learned, then how?
      • Which fusion, and where to fuse: channel fusion vs. layer fusion?
    • Execution in the real world or in simulation (see the fusion sketch after this list)
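
A minimal sketch of the layer-fusion variant, assuming a learned nn.Embedding for the object name (a fixed ECOC code table would simply replace the embedding); the class name and all layer sizes are placeholders:

```python
import torch
import torch.nn as nn

class ConditionedPickpointNet(nn.Module):
    # Image + object-name embedding -> (x, y) pick point for that object.
    def __init__(self, num_objects, emb_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.embed = nn.Embedding(num_objects, emb_dim)  # learned; swap for fixed ECOC codes
        self.head = nn.Sequential(
            nn.Linear(64 + emb_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, image, object_id):
        feats = self.backbone(image)   # (B, 64)
        ctx = self.embed(object_id)    # (B, emb_dim)
        # Layer fusion after pooling; channel fusion would instead
        # broadcast ctx spatially and concatenate before the convs.
        return self.head(torch.cat([feats, ctx], dim=1))

net = ConditionedPickpointNet(num_objects=5)
points = net(torch.randn(2, 3, 64, 64), torch.tensor([0, 3]))  # (2, 2)
```
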
  10. Leaner convolutions: application to embedded systems and uncertainty estimation

Deep learning projects

  1. Impact of regression loss vs. classification loss on adversarial attacks

    • Loss functions: softmax cross-entropy vs. binary cross-entropy vs. MSE vs. L1
    • Embedding: one-hot vs. error-correcting codes
      • Use Foolbox to run the adversarial attacks (see the sketch after this list)
    • Datasets: CIFAR-100, German traffic sign
    • Segmentation: Pascal VOC
    • Open problem: converting regression outputs to classification classes
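
A hedged sketch of the attack loop using Foolbox 3's PyTorchModel wrapper and LinfPGD attack; the toy model, random data, and epsilon grid are placeholders (and the regression-to-class decoding from the open problem above would still need a custom criterion):

```python
import foolbox as fb
import torch
import torch.nn as nn

# Placeholder classifier and data; in the study these would be the
# trained network and CIFAR-100 / traffic-sign batches in [0, 1].
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 100)).eval()
images, labels = torch.rand(8, 3, 32, 32), torch.randint(0, 100, (8,))

fmodel = fb.PyTorchModel(model, bounds=(0, 1))
attack = fb.attacks.LinfPGD()

# success has shape (num_epsilons, batch); robust accuracy per epsilon
raw, clipped, success = attack(fmodel, images, labels,
                               epsilons=[0.001, 0.01, 0.03, 0.1])
robust_acc = 1 - success.float().mean(dim=-1)
```
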
  2. ECOC embedding with radial encoding for regression learning

    • ECOC converts one-hot encodings to Hadamard codes
    • Hadamard codes need to be longer than the one-hot encoding
    • Can we instead encode the classification problem with regression outputs?
    • How do we encode: as angular values from 0 to 2pi
    • Divide 0 to 2pi equally with respect to the number of classes
    • Use a radial (angular) loss function based on a Gaussian or Laplace likelihood (see the sketch after this list)
    • How many embedding dimensions are required?
    • Datasets: higher numbers of classes, e.g. CIFAR-100 or German traffic sign
    • Again, compare against classification losses on adversarial attack performance
    • OOD performance
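
A small sketch of the angular encoding and a Gaussian-style wrapped loss, assuming a single angle output for illustration; the function names and the fixed sigma are assumptions, not part of the proposal:

```python
import math
import torch

def class_to_angle(labels, num_classes):
    # Class k -> angle theta_k = 2*pi*k / K, equally dividing the circle.
    return 2 * math.pi * labels.float() / num_classes

def angular_gaussian_nll(pred, target, sigma=0.25):
    # Gaussian NLL on the wrapped angular error, so 0 and 2*pi coincide.
    diff = torch.remainder(pred - target + math.pi, 2 * math.pi) - math.pi
    return (0.5 * (diff / sigma) ** 2).mean() + 0.5 * math.log(2 * math.pi * sigma ** 2)

def angle_to_class(pred, num_classes):
    # Decode by rounding to the nearest class angle, modulo K.
    return torch.round(pred * num_classes / (2 * math.pi)).long() % num_classes

labels = torch.tensor([0, 7, 99])
theta = class_to_angle(labels, num_classes=100)
assert torch.equal(angle_to_class(theta, 100), labels)
```
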
  3. Huffman coding or arithmetic coding

    • Huffman coding and arithmetic coding are used for lossless compression based on the probability of the data
    • The most probable symbols are encoded with fewer bits and the least probable with more bits
    • The same technique can be applied based on the confusion between classes
    • The least-confused classes can be given fewer bits
    • and the most-confused classes get more bits
    • The idea: since the most confusable classes share all bits except one, after decoding we can be sure the class is one of the two, even when we cannot tell which (see the sketch after this list)
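
A sketch of the standard Huffman construction over per-class weights (a common compact heapq-based implementation); deriving the weights from the confusion matrix, e.g. each class's correct-classification rate, is the assumption here, so rarely-confused classes get short codes and confusable classes end up deep in the tree with long, shared prefixes:

```python
import heapq
from itertools import count

def huffman_codes(weights):
    # Standard Huffman: higher weight -> shorter code. Feeding in each
    # class's correct-classification rate (from the confusion matrix)
    # pushes easily-confused classes into long codes with shared prefixes.
    tie = count()  # tie-breaker so heapq never compares the code dicts
    heap = [(w, next(tie), {cls: ""}) for cls, w in weights.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {k: "0" + v for k, v in c1.items()}
        merged.update({k: "1" + v for k, v in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]

# 'cat' and 'dog' are often confused -> low weight -> long codes that
# share every bit except the last, matching the idea above.
print(huffman_codes({"car": 0.9, "truck": 0.7, "cat": 0.2, "dog": 0.2}))
# e.g. {'car': '0', 'truck': '11', 'cat': '100', 'dog': '101'}
```
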
  4. Does a multi-layer loss improve training?

    • Currently the loss is calculated using only the last layer and the labels.
    • We can add FC heads at different levels and calculate a loss at each level
    • Does adding such losses improve performance?
    • Is this also true for the evidential uncertainty loss?
      • Is the loss decreasing at each level? As more layers are added, the network should learn more
      • Does this have an impact on reliability or on adversarial attacks?
      • Can this be used for OOD detection?
    • How to combine the different losses (average or weighted average)?
    • For semantic segmentation, one can use the upsampled outputs and generate these losses as in the paper "Multi-view deep learning for consistent semantic mapping with RGB-D cameras"
    • For semantic segmentation: how to reduce the labels to different sizes (see the sketches after this list):
      • Convert the label image to a one-hot encoding
      • Use max pooling to reduce the shape
      • This ensures that every class present in the corresponding window of the original label map still appears in the reduced map
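
A minimal sketch of the multi-level loss idea, with hypothetical auxiliary FC heads and a weighted-average combination (the stage sizes and weights are placeholders):

```python
import torch
import torch.nn as nn

class DeepSupervisedNet(nn.Module):
    # Two conv stages, each with its own classifier head, so a loss can
    # be computed (and inspected) at every level.
    def __init__(self, num_classes):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))
        self.head2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        return self.head1(f1), self.head2(f2)

model = DeepSupervisedNet(num_classes=10)
images, labels = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
ce = nn.CrossEntropyLoss()
logits1, logits2 = model(images)
loss = 0.3 * ce(logits1, labels) + 0.7 * ce(logits2, labels)  # weighted average
```

And a sketch of the one-hot + max-pool label reduction for segmentation; max-pooling the one-hot planes guarantees that every class present in a pooling window is still marked in the reduced (now multi-hot) map:

```python
import torch
import torch.nn.functional as F

def downsample_labels(labels, num_classes, factor):
    # labels: (B, H, W) integer class map -> (B, C, H/f, W/f) multi-hot map.
    onehot = F.one_hot(labels, num_classes).permute(0, 3, 1, 2).float()
    return F.max_pool2d(onehot, kernel_size=factor)

labels = torch.randint(0, 21, (2, 64, 64))  # e.g. Pascal VOC's 21 classes
small = downsample_labels(labels, 21, 4)    # (2, 21, 16, 16)
```
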
