Sequence Generation

Conditional Sequence Generation

Maximizing Expected Reward

Policy Gradient
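A minimal REINFORCE-style sketch of the policy-gradient update for maximizing expected reward; model.sample, reward, and the surrounding training loop are hypothetical placeholders, not the lecture's code:

    # Policy gradient (REINFORCE) sketch in PyTorch for conditional sequence generation.
    # Assumes a seq2seq `model` whose sample(c) returns a sequence and its log-probability,
    # and a task-specific reward(y, c), e.g. a human label or a discriminator score.
    import torch

    def policy_gradient_step(model, optimizer, conditions, reward, n_samples=4):
        loss = 0.0
        for c in conditions:
            for _ in range(n_samples):
                y, log_prob = model.sample(c)      # sampled output sequence + log P(y | c)
                r = reward(y, c)                   # scalar reward for that sample
                loss = loss - r * log_prob         # maximizing E[R] <=> minimizing -R * log P
        loss = loss / (len(conditions) * n_samples)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()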

Conditional GAN

 

Abstractive Summarization

 


Feature Extraction:

InfoGAN

VAE-GAN

BiGAN

Triple GAN

 

Feature Disentangle (disentangle: v. to untangle, separate)

 

 

 


J-S divergence problem

 

Wasserstein GAN:

Earth Mover's Distance

 

Lipschitz Function
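A minimal WGAN critic update sketch (assumed PyTorch models D and G, real batch x, noise z); weight clipping is the original WGAN's crude way of keeping the critic roughly Lipschitz:

    import torch

    def wgan_critic_step(D, G, x, z, opt_D, clip=0.01):
        # The critic maximizes E[D(x)] - E[D(G(z))], an estimate of the Earth Mover's Distance.
        loss_D = -(D(x).mean() - D(G(z).detach()).mean())
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()
        # Enforce the Lipschitz constraint (approximately) by clipping the weights.
        for p in D.parameters():
            p.data.clamp_(-clip, clip)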

intractable: adj. hard to handle <==> difficult

 

 


Theory behind GAN:

Divergence

KL Divergence

sample: v. to draw samples (from a distribution)

J-S divergence
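A small numpy sketch of the two divergences for discrete distributions (assumes p and q are strictly positive and sum to 1):

    import numpy as np

    def kl(p, q):
        # KL(P || Q) = sum_x P(x) * log(P(x) / Q(x))
        p, q = np.asarray(p, float), np.asarray(q, float)
        return np.sum(p * np.log(p / q))

    def js(p, q):
        # JS(P || Q) = 1/2 KL(P || M) + 1/2 KL(Q || M), with M the mixture (P + Q) / 2
        m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)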

 


Unsupervised Conditional Generation

  • Direct Transformation
  • Projection to Common Space:

CycleGAN:

Cycle consistency
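A minimal cycle-consistency loss sketch (assumed PyTorch generators G_XY: X->Y and G_YX: Y->X); translating and then translating back should reconstruct the input:

    import torch
    import torch.nn.functional as F

    def cycle_loss(G_XY, G_YX, x, y, lam=10.0):
        loss_x = F.l1_loss(G_YX(G_XY(x)), x)   # x -> Y -> back to X should equal x
        loss_y = F.l1_loss(G_XY(G_YX(y)), y)   # y -> X -> back to Y should equal y
        return lam * (loss_x + loss_y)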


GAN: Generative Adversarial Network

"the best thing since sliced bread" (idiom for a great new invention)

Discriminator

Step 1: Fix G, update D

Step 2: Fix D, update G
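A minimal sketch of this alternating update in PyTorch (assumed generator G, discriminator D with sigmoid output, real batch x_real, noise z):

    import torch
    import torch.nn.functional as F

    def gan_step(G, D, opt_G, opt_D, x_real, z):
        # Step 1: fix G, update D (D should output 1 for real data, 0 for generated data)
        d_real = D(x_real)
        d_fake = D(G(z).detach())              # detach so gradients do not flow into G
        loss_D = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
               + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

        # Step 2: fix D, update G (G tries to make D label its samples as real)
        d_fake = D(G(z))
        loss_G = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()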

 

Can Generator learn by itself?

 

Auto-encoder

Decoder = Generator
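A minimal auto-encoder sketch (assuming 784-dimensional flattened images and a 2-dimensional code); after training on reconstruction, the decoder alone can be used as a generator by feeding it a code vector:

    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
    decoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())

    def reconstruction_loss(x):
        code = encoder(x)        # compress the input to a low-dimensional code
        x_hat = decoder(code)    # reconstruct the input from the code
        return ((x - x_hat) ** 2).mean()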

 

Can Discriminator generate?

 


Why CNN for image

Filter: 3×3

stride (step size)

Feature Map

The number of filters equals the number of output feature maps (images)

Max Pooling
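A minimal PyTorch sketch of these building blocks (assumed 28x28 grayscale input); note that the number of filters equals the number of feature maps produced:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, stride=1)  # 16 filters of size 3x3
    pool = nn.MaxPool2d(kernel_size=2)                                         # 2x2 max pooling

    x = torch.randn(1, 1, 28, 28)   # one 28x28 grayscale image
    fmap = conv(x)                  # -> (1, 16, 26, 26): 16 feature maps
    out = pool(fmap)                # -> (1, 16, 13, 13): downsampled by max pooling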

Deep dream: Exaggerate

Deep style: 

 


Fat + Short vs. Thin + Tall

Deep ==> Modularization

Why Deep? Training data is insufficient

GMM

Universality Theorem

Analogy

End-to-end Learning

 


ReLU:

  • Leaky ReLU
  • Parametric ReLU

Maxout: 

ReLU is a special case of Maxout.
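A small PyTorch sketch of a maxout unit, showing why ReLU is a special case: with two linear pieces where the second piece has zero weight and zero bias, max(wx + b, 0) is exactly ReLU:

    import torch

    def maxout(x, W, b):
        # x: (n, in_dim), W: (pieces, in_dim, out_dim), b: (pieces, out_dim)
        z = torch.einsum('pio,ni->npo', W, x) + b   # one linear output per piece
        return z.max(dim=1).values                  # element-wise max over the pieces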

Learnable activation function

RMSProp:

Momentum:

RMSProp + Momentum ==> Adam
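A minimal numpy sketch of the Adam update, combining momentum (first moment m) with an RMSProp-style running average of squared gradients (second moment v):

    import numpy as np

    def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        m = b1 * m + (1 - b1) * g          # momentum: moving average of gradients
        v = b2 * v + (1 - b2) * g ** 2     # RMSProp: moving average of squared gradients
        m_hat = m / (1 - b1 ** t)          # bias correction (t starts at 1)
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
        return w, m, v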

Regularization:

Dropout

 


Backpropagation

to compute gradients efficiently

Chain Rule:

dz/dx = dz/dy × dy/dx

  • Forward pass
  • Backward pass
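A minimal numpy sketch for one sigmoid layer: the forward pass stores the activations, and the backward pass applies the chain rule to pass gradients back efficiently:

    import numpy as np

    def forward(x, W, b):
        z = W @ x + b
        a = 1.0 / (1.0 + np.exp(-z))     # sigmoid activation
        return a

    def backward(x, a, W, grad_a):
        grad_z = grad_a * a * (1 - a)    # chain rule through the sigmoid: da/dz = a(1 - a)
        grad_W = np.outer(grad_z, x)     # gradient w.r.t. the weights
        grad_b = grad_z                  # gradient w.r.t. the bias
        grad_x = W.T @ grad_z            # gradient passed back to the previous layer
        return grad_W, grad_b, grad_x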

 

 


Fully Connected Feedforward Network

Output Layer = Multi-class Classifier
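A minimal PyTorch sketch (assuming 784-dimensional inputs and 10 classes); the softmax output layer acts as the multi-class classifier:

    import torch.nn as nn

    net = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),   # hidden layer 1
        nn.Linear(256, 256), nn.ReLU(),   # hidden layer 2
        nn.Linear(256, 10),               # one score per class
        nn.Softmax(dim=1),                # output layer = multi-class classifier
    )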

Example

 


Step 1: Function Set

Step 2: Goodness of a Function

Cross Entropy

Step 3: Find the best Function (Gradient Descent)

No square error (cross entropy is used instead)

Discriminative sometimes outperforms Generative (probabilistic model: Naive Bayes)

Multi-class Classification

Softmax ==> 0<y<1
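A small numpy sketch of softmax and cross-entropy for a single example (scores z, one-hot target y); softmax squashes the scores into probabilities between 0 and 1 that sum to 1:

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())            # subtract the max for numerical stability
        return e / e.sum()

    def cross_entropy(y_hat, y_onehot):
        return -np.sum(y_onehot * np.log(y_hat))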

Limitation of Logistic Regression

 


Classification as Regression

Generative Model:

P(x) = P(x|C1)P(C1) + P(x|C2)P(C2)

Gaussian Distribution

Find Maximum Likelihood (mean*, covariance*)
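A minimal numpy sketch of that maximum-likelihood fit: for a Gaussian, the MLE solutions are just the sample mean and the (biased) sample covariance of the class's examples:

    import numpy as np

    def fit_gaussian(X):                       # X: (N, d) examples of one class
        mean = X.mean(axis=0)                  # mean*
        centered = X - mean
        cov = centered.T @ centered / len(X)   # covariance* (divide by N, not N - 1)
        return mean, cov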

All dimensions are independent ==> Naive Bayes Classifier

σ(z) = 1 / (1 + exp(-z))


On-line vs Off-line:

Momentum

Adagrad

RMSProp

Adam

Real Application

 

 


Adagrad

root mean square

g (gradient): partial derivative

best step: |First derivative| / Second derivative
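A minimal numpy sketch of the Adagrad update: dividing the learning rate by the square root of the sum of squared past gradients plays the role of the second derivative in the "best step" above:

    import numpy as np

    def adagrad_step(w, g, sum_sq, lr=0.1, eps=1e-8):
        sum_sq = sum_sq + g ** 2                      # accumulate squared gradients
        w = w - lr * g / (np.sqrt(sum_sq) + eps)      # scale the step by their root
        return w, sum_sq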

Stochastic Gradient Descent

Feature Scaling

Taylor Series

 


Sources of error: bias and variance

mean: μ

variance: σ^2

s^2 is an estimator of σ^2

E[f*] = f̄: the expected value of f*

Simpler models have smaller variance: a simple model is less affected by fluctuations in the sampled data

More complex models have smaller bias

Regularization ==> makes the fitted curve smoother

 

Cross Validation
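A minimal N-fold cross-validation sketch; train_and_eval is a hypothetical callback that trains on the given split and returns the validation error:

    import numpy as np

    def n_fold_cv(X, y, train_and_eval, n_folds=3):
        idx = np.arange(len(X))
        folds = np.array_split(idx, n_folds)
        errors = []
        for i in range(n_folds):
            val = folds[i]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
            errors.append(train_and_eval(X[train], y[train], X[val], y[val]))
        return np.mean(errors)                 # average validation error over the folds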
