
Notes

Navigation

Disclaimer

  • These are notes on how I interpret the tutorials, articles, and books that I've read.
  • All sources are cited; anything without a citation is based on my personal experience.
  • Some information may be distorted, so I would be very grateful if you could let me know via twitter or linkedin.

A

🔙 Back

B

  • Boltzmann Machine
  • Backward Pass
    • Call optimizer.zero_grad() after each .step() to prevent .backward() from accumulating gradients across iterations (see the sketch below).
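A minimal sketch of the usual PyTorch training step, with a made-up linear model and random data just to show where zero_grad() fits:

```python
import torch
import torch.nn as nn

# Toy model and data, only to illustrate the zero_grad / backward / step order.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

x = torch.randn(32, 10)
y = torch.randn(32, 1)

for _ in range(5):
    optimizer.zero_grad()          # clear gradients accumulated by the previous .backward()
    loss = criterion(model(x), y)  # forward pass
    loss.backward()                # accumulate gradients into each parameter's .grad
    optimizer.step()               # update parameters using those gradients
```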

🔙 Back

C

🔙 Back

D

  • DCGAN
    • print(netD.main[5].weight.size()) | torch.Size([256, 128, 4, 4]) means 256 feature maps out, 128 feature maps in, and a 4x4 kernel (see the sketch after this list)
    • Every iteration, the convolution produces a different result for each feature map
    • If Loss D is near zero while Loss G is still high, the generator is producing garbage
    • Loss G 🔺 = trying to fool D with garbage; Loss D 🔻 = D isn't learning anything
    • Loss G 🔻 = generating good images; Loss D 🔻 = D can still distinguish fake and real
    • D(x) - the average output (across the batch) of the discriminator for the all-real batch. This should start close to 1, then theoretically converge to 0.5 as G gets better. Why? Initially the discriminator knows how to recognize the real samples (output mean ≈ 1), then it starts getting confused by the samples the generator produces as training on the fake batches continues.
    • D(G(z)) - the average discriminator output for the all-fake batch. The first number is before D is updated and the second number is after D is updated. These numbers should start near 0 and converge to 0.5 as G gets better. Why? Initially the discriminator knows how to recognize the fake samples (output mean ≈ 0), then it starts getting confused because the generator can produce images almost as good as the real ones.
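A small sketch of the weight-shape convention mentioned above; the layer below is a stand-in for netD.main[5] (the real index and channel sizes depend on how the DCGAN discriminator is defined):

```python
import torch.nn as nn

# Conv2d weights are laid out as (out_channels, in_channels, kernel_h, kernel_w),
# so [256, 128, 4, 4] means 256 feature maps out, 128 feature maps in, 4x4 kernel.
conv = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=4,
                 stride=2, padding=1, bias=False)
print(conv.weight.size())  # torch.Size([256, 128, 4, 4])
```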

🔙 Back

E

🔙 Back

F

🔙 Back

G

  • Training on GPU
    • I found that TensorFlow can harness more GPU power than PyTorch while training a DCGAN using each framework's tutorial code

🔙 Back

H

  • Hook PyTorch
    • First create the hook function, then create the model, then register the hook (see the sketch below)
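A minimal sketch of that order using a forward hook; the model and hook function here are made up for illustration:

```python
import torch
import torch.nn as nn

# 1. create the hook function
def print_activation(module, inputs, output):
    print(module.__class__.__name__, "output shape:", output.shape)

# 2. create the model
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 1))

# 3. register the hook, then run a forward pass to trigger it
handle = model[0].register_forward_hook(print_activation)
model(torch.randn(2, 10))
handle.remove()  # remove the hook when it is no longer needed
```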

🔙 Back

I

🔙 Back

J

🔙 Back

K

🔙 Back

L

  • Latent Space
    • A latent space is a compressed representation of a dataset (see the sketch below).
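A toy autoencoder sketch (the layer sizes are arbitrary) where the bottleneck is the latent space, a compressed representation of the input:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # 784-dimensional input compressed down to a 2-dimensional latent code
        self.encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 2))
        self.decoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 784))

    def forward(self, x):
        z = self.encoder(x)          # point in the latent space
        return self.decoder(z), z

model = AutoEncoder()
x = torch.randn(8, 784)
reconstruction, latent = model(x)
print(latent.shape)  # torch.Size([8, 2])
```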

🔙 Back

M

🔙 Back

N

  • Neuroscience
    • If your cells can turn into eyeballs or teeth, they can probably also do backpropagation or something similar to it [YouTube: Preserve Knowledge]

🔙 Back

O

🔙 Back

P

  • P Value

    • p-value: the probability of seeing a result at least as extreme as the observed one if the current/original idea (the null hypothesis) were TRUE
    • The lower the p-value, the more significant the independent variable, i.e. the more impact it has on the dependent variable. < 5% is usually treated as significant, > 5% as not significant
  • Polynomial Linear Regression

    • Even though the relation between x and y is non-linear, you can use Polynomial Linear Regression (see the sketch after this list)
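A minimal scikit-learn sketch of that idea; the data and degree are made up, and the model stays linear in its coefficients even though the x–y relation is quadratic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Non-linear (quadratic) relation between x and y with some noise.
x = np.linspace(0, 5, 50).reshape(-1, 1)
y = 1.5 * x.ravel() ** 2 - 2.0 * x.ravel() + np.random.normal(0, 1, 50)

# Expand x into polynomial features, then fit an ordinary linear regression.
X_poly = PolynomialFeatures(degree=2).fit_transform(x)  # columns: 1, x, x^2
model = LinearRegression().fit(X_poly, y)
print(model.coef_, model.intercept_)
```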

🔙 Back

Q

🔙 Back

R

  • R

    • Namespaces are separated using a dot
  • Preview .md files in VS Code

  • Reactjs Concepts

    • Split components as needed, and name props from the component's own point of view rather than the context in which it is being used. React Doc

🔙 Back

S

  • Data Security in ML

    • Even with decentralized deep learning, a GAN can generate prototypical samples of the targeted data. [src: Arxiv]
  • Sparse coding

  • Spyder

    • An object cannot be viewed in Spyder

🔙 Back

T

🔙 Back

U

  • Unbalanced Data

🔙 Back

V

🔙 Back

W

🔙 Back

X

🔙 Back

Y

🔙 Back

Z

🔙 Back