
Transfer Learning

2022-06-11 06:07:00 Tcoder-l3est


Labeled Target Data & Labeled Source Data

Model Fine-tuning

Task description: use a large amount of labeled source data plus a small amount of labeled target data.

Characteristic: the target data is very scarce.

When only one (or a few) target examples are available, this is called one-shot learning.

Example: speech recognition (speaker adaptation): the voice assistant has the user say a few words, then adapts the model to that speaker.

Approaches

Conservative Training

Train a network on the large amount of source data and use it to initialize another network's parameters, then fine-tune those parameters with the target data. Because the target data is so scarce, this easily overfits.

Conservative training uses the old network as a regularizer for the new one, so the fine-tuned network does not drift far from the pre-trained one.
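The regularizing effect can be sketched as an L2 penalty pulling the fine-tuned weights back toward the pre-trained ones (a minimal numpy sketch; the function name and the penalty weight `lam` are illustrative, not from the original lecture):

```python
import numpy as np

def conservative_loss(task_loss, new_params, old_params, lam=0.1):
    """Task loss plus an L2 penalty keeping the fine-tuned parameters
    close to the pre-trained (source) parameters."""
    penalty = sum(np.sum((w_new - w_old) ** 2)
                  for w_new, w_old in zip(new_params, old_params))
    return task_loss + lam * penalty

# Identical weights incur no penalty; drifting weights are punished.
old = [np.ones((2, 2))]
print(conservative_loss(0.5, [np.ones((2, 2))], old))        # 0.5
print(conservative_loss(0.5, [np.ones((2, 2)) * 3.0], old))  # 0.5 + 0.1 * 16
```

In a real setup the penalty can also be placed on the networks' outputs rather than their weights; the idea is the same.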

Ways to limit the training:

  1. Only adjust one layer's parameters, to prevent overfitting.

  2. With enough target data, fine-tune the whole network.

Which layer should be transferred?

Speech recognition: usually adjust the first few layers, the ones near the input (which depend on the individual speaker), and copy the rest.

Images: fix the front layers, which do the basic feature extraction, and transfer them; fine-tune the few layers near the output.


Combining layer transfer with fine-tuning gives the best results.
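A rough numpy sketch of layer transfer, assuming a toy 3-layer network (the layer shapes and the frozen mask are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical 3-layer source network trained on abundant source data.
source_weights = [rng.normal(size=(8, 8)) for _ in range(3)]

# Layer transfer: copy all layers, freeze the first two, and train
# only the last layer on the scarce target data.
target_weights = [w.copy() for w in source_weights]
frozen = [True, True, False]

def forward(x, weights):
    for w in weights:
        x = relu(x @ w)
    return x

# A gradient update would touch only the unfrozen layers.
trainable = sum(w.size for w, f in zip(target_weights, frozen) if not f)
print(trainable)  # 64 parameters: just the last 8x8 layer
```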

Multitask Learning


Tasks A and B share the same features: the lower layers are shared, and each task keeps its own upper layers.

When the features cannot be shared directly: apply a transform in the middle, so both tasks map into a common intermediate representation that can be shared.
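The shared-layers idea can be sketched as one trunk with two task-specific heads (all sizes are hypothetical; real multitask training would also combine both tasks' losses):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(1)

# Shared trunk: both tasks reuse the same low-level feature extractor.
w_shared = rng.normal(size=(10, 16))

# Task-specific heads, e.g. 3 classes for task A and 5 for task B.
w_head_a = rng.normal(size=(16, 3))
w_head_b = rng.normal(size=(16, 5))

def predict(x, w_head):
    h = relu(x @ w_shared)  # shared feature extraction
    return h @ w_head       # task-specific output

x = rng.normal(size=(4, 10))       # a batch of 4 examples
print(predict(x, w_head_a).shape)  # (4, 3)
print(predict(x, w_head_b).shape)  # (4, 5)
```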

The key is to select an appropriately related task.

Example: speech recognition. The sound signal is fed in and transcribed into text; several recognition tasks (e.g., different languages) can be trained together, sharing the front layers.

Progressive Neural Network

First learn task A, then learn task B.

Will learning B harm what was learned for task A?

The blue network's (task A's) hidden outputs are fed into the green network (task B) as additional inputs. During backpropagation for task B, the blue network is not updated: it stays locked, so task A is unaffected.

With more tasks, each new column receives lateral connections from all previously trained columns.
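The lateral connection can be sketched in numpy, assuming two toy one-hidden-layer columns (all shapes are invented for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(2)

# Column A: already trained on task A, now locked (never updated again).
wa1 = rng.normal(size=(6, 8))

# Column B: trained on task B. Its output layer also receives column A's
# hidden activations through a lateral connection.
wb1 = rng.normal(size=(6, 8))
wb2 = rng.normal(size=(8, 4))
lateral = rng.normal(size=(8, 4))  # from A's hidden layer into B's output

def forward_b(x):
    ha = relu(x @ wa1)              # frozen column A activations
    hb = relu(x @ wb1)              # column B's own activations
    return hb @ wb2 + ha @ lateral  # B's output uses both

print(forward_b(np.ones(6)).shape)  # (4,)
```

During training only `wb1`, `wb2`, and `lateral` would receive gradients; `wa1` stays fixed, which is exactly why task A cannot be damaged.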

Unlabeled Target Data & Labeled Source Data

For example, handwritten digit recognition: the source is labeled digit images, while the target is other digit images without labels.

Treating the source as the training set and the target as the test set does not work well, because of the mismatch: the two datasets' distributions are not the same.

Domain-adversarial Training

The idea is to map source and target into the same domain; otherwise their features have no overlap at all.

The feature extractor must try to remove the differences between source and target features.

Fooling the domain classifier alone would be easy. Why?

If the (green) feature extractor simply output all zeros, the domain classifier would be fooled immediately. So a label predictor is added, and the features must satisfy it as well.

The domain classifier does fail in the end, but it should struggle before failing; otherwise the feature extractor has learned nothing useful.
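The standard mechanism for this tug-of-war is a gradient reversal layer; a minimal numpy sketch of its behavior (function names are illustrative):

```python
import numpy as np

# Gradient reversal layer: the forward pass is the identity, but on the
# backward pass the domain classifier's gradient is flipped (scaled by
# -lam), so the feature extractor learns to confuse the domain
# classifier instead of helping it.

def grl_forward(x):
    return x  # identity in the forward direction

def grl_backward(grad_from_domain_classifier, lam=1.0):
    return -lam * grad_from_domain_classifier

g = np.array([0.5, -2.0])
print(grl_backward(g))  # [-0.5  2. ]
```

The label predictor's gradient passes through unchanged, which is how the features stay useful for classification while becoming domain-invariant.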

Zero-shot Learning

Some target classes may never appear in the source data at all.

Speech recognition handles this by using phonemes (phonetic symbols) as outputs instead of words.

***Represent each class by its attributes!*** Find attributes that uniquely identify each class.

Training:

Predict the attributes, rather than classifying into classes directly.

x1 and x2 are mapped by a function f into an embedding space; their corresponding attribute vectors y1 and y2 are mapped into the same space by a function g. A new input x3 goes through the same f. The training goal is to make f(x) and g(y) of matching pairs as close as possible.

Estimating the attributes for each x, however, may require a database (a class-to-attribute table) for support.


Modification: mismatched pairs should also be pushed as far apart as possible.

k is called the margin. Using the inner product as similarity, the per-example loss is

max(0, k − f(x^n)·g(y^n) + max_{m≠n} f(x^n)·g(y^m))

When the expression inside is greater than 0 there is a loss; the loss is zero exactly when the matched pair's inner product exceeds the best mismatched pair's by at least k.

If no attribute annotations exist: use word vectors of the class names instead.
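The margin loss can be written down directly (a numpy sketch; `f_x` and `g_y` hold the embedded inputs and attribute vectors row by row, and the toy values are illustrative):

```python
import numpy as np

def zero_shot_margin_loss(f_x, g_y, k=1.0):
    """Sum over n of max(0, k - f(x_n).g(y_n) + max_{m!=n} f(x_n).g(y_m)).
    Rows of f_x are embedded inputs; rows of g_y are embedded attributes."""
    scores = f_x @ g_y.T            # every inner product f(x_n).g(y_m)
    matched = np.diag(scores)       # the matched pairs f(x_n).g(y_n)
    mismatched = scores.copy()
    np.fill_diagonal(mismatched, -np.inf)  # exclude the matched pair
    worst = mismatched.max(axis=1)  # best-scoring wrong pairing
    return np.maximum(0.0, k - matched + worst).sum()

# Matched pairs have inner product 2; mismatched pairs are orthogonal:
f = np.eye(3) * 2.0
g = np.eye(3)
print(zero_shot_margin_loss(f, g, k=1.0))  # 0.0 -- the margin is satisfied
```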

Convex combination of semantic embeddings:

If the classifier is unsure between classes, combine their word vectors by the predicted probabilities; the result lies in between them, and its nearest class word may be something never seen in training.
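A sketch of the convex-combination idea with hypothetical 2-D word vectors (the classic example mixes "lion" and "tiger" to land near "liger", a class never seen in training):

```python
import numpy as np

# Hypothetical 2-D word vectors chosen so the geometry is obvious.
word_vec = {
    "lion":  np.array([1.0, 0.0]),
    "tiger": np.array([0.0, 1.0]),
    "liger": np.array([0.5, 0.5]),
    "dog":   np.array([-1.0, -1.0]),
}

def convex_combination(probs):
    """Mix word vectors by predicted class probabilities, then return
    the class whose vector is nearest to the mixture."""
    mix = sum(p * word_vec[c] for c, p in probs.items())
    return min(word_vec, key=lambda c: np.linalg.norm(word_vec[c] - mix))

# 50% lion + 50% tiger lands exactly on "liger".
print(convex_combination({"lion": 0.5, "tiger": 0.5}))  # liger
```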

Unlabeled Source and Labeled Target Data

self-taught learning

It is similar to semi-supervised learning, but with one big difference: the unlabeled data may be unrelated to the task; it can come from entirely different classes.


Unlabeled Source and Unlabeled Target Data

Self-taught Clustering

Copyright notice: this article was created by [Tcoder-l3est]; please include the original link when reposting. Thanks.
https://yzsam.com/2022/03/202203020530020056.html