[Deep Learning] (3) The Encoder mechanism in Transformer, with complete PyTorch code
2022-06-28 22:47:00 【Vertical sir】
Hello everyone. Today I'd like to share some of the key points involved in the Transformer Encoder: Word Embedding, Position Embedding, and self_attention_Mask.
This post is a revised walkthrough of my earlier article 《Transformer Code Reproduction》, which I strongly recommend reading first: https://blog.csdn.net/dgvv4/article/details/125491693
Because the Transformer involves many concepts, the next few articles will cover the Decoder mechanism, loss computation, practical examples, and so on.
1. Word Embedding
Word Embedding can be understood as mapping each word in a sentence to a feature vector.
The code for this part is as follows.
First, specify the lengths of the feature (source) sequence and the target sequence. src_len = [2, 4] means the feature sequence contains 2 sentences: the first sentence has 2 words and the second has 4 words.
The vocabulary of both sequences is set to 8 words, so word indices range over 1~8 (note that torch.randint's upper bound is exclusive, so the random indices actually drawn below fall in 1~7). The words of each sentence are then generated at random, giving the feature sequence src_seq and the target sequence tgt_seq.
Because the sentences have different lengths (for example, the first sentence of src_seq has 2 words and the second has 4), they must be padded to a common length before being fed to Word Embedding: 2 zeros are appended to the first sentence so that the two sentences of the feature sequence are equally long.
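As an aside (a hedged alternative, not used in this article's code): torch.nn.utils.rnn.pad_sequence can perform the same right-padding in a single call. A minimal sketch with toy sentences:
from torch.nn.utils.rnn import pad_sequence
import torch
seqs = [torch.tensor([6, 4]), torch.tensor([6, 4, 1, 7])]  # two sentences of different lengths
padded = pad_sequence(seqs, batch_first=True, padding_value=0)  # pad with 0 up to the longest length
print(padded)        # tensor([[6, 4, 0, 0], [6, 4, 1, 7]])
print(padded.shape)  # torch.Size([2, 4])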
import torch
from torch import nn
from torch.nn import functional as F
import numpy as np

max_word_idx = 8  # the vocabulary of the feature and target sequences consists of 8 words
model_dim = 6     # after word embedding, each word is represented by a vector of length 6

# ------------------------------------------------------ #
# (1) Build the sequences; each word of a sequence is represented by an index
# ------------------------------------------------------ #
# Specify the sentence lengths
src_len = torch.Tensor([2, 4]).to(torch.int32)  # the feature sequence has 2 sentences, of lengths 2 and 4
tgt_len = torch.Tensor([4, 3]).to(torch.int32)  # the target sequence has 2 sentences, of lengths 4 and 3
print(src_len, tgt_len)  # tensor([2, 4]) tensor([4, 3])

# Create the sequences; each word is an index from the vocabulary
# (torch.randint's upper bound is exclusive, so the sampled indices fall in 1~7)
src_seq = [torch.randint(1, max_word_idx, (L,)) for L in src_len]  # feature sequence
tgt_seq = [torch.randint(1, max_word_idx, (L,)) for L in tgt_len]  # target sequence
print(src_seq, tgt_seq)
# [tensor([6, 4]), tensor([6, 4, 1, 7])]    # feature sequence: the first sentence has 2 words, the second has 4
# [tensor([4, 2, 1, 3]), tensor([6, 5, 1])] # target sequence: the first sentence has 4 words, the second has 3

# The sentences have different lengths, so pad them with 0 to a common length
new_seq = []  # stores the padded sentences
for seq in src_seq:  # iterate over the sentences of the feature sequence
    sent = F.pad(seq, pad=(0, max(src_len) - len(seq)))  # pad zeros on the right so all sentences have the same length
    sent = torch.unsqueeze(sent, dim=0)  # make it 2-D: [max_src_len] ==> [1, max_src_len]
    new_seq.append(sent)  # save the padded sentence
for seq in tgt_seq:  # iterate over the sentences of the target sequence
    sent = F.pad(seq, pad=(0, max(tgt_len) - len(seq)))
    sent = torch.unsqueeze(sent, dim=0)  # make it 2-D: [max_tgt_len] ==> [1, max_tgt_len]
    new_seq.append(sent)  # save the padded sentence

# The feature and target sentences are stored in a list; turn them into tensors by concatenating along dim=0
src_seq = torch.cat(new_seq[:2], dim=0)  # feature sequence
tgt_seq = torch.cat(new_seq[2:], dim=0)  # target sequence
print(src_seq, src_seq.shape)  # feature sequence, shape=[2,4]: 2 sentences of 4 words each
print(tgt_seq, tgt_seq.shape)  # same for the target sequence
'''
src_seq = [[6, 4, 0, 0], [6, 4, 1, 7]]
src_seq.shape = [2, 4]
tgt_seq = [[4, 2, 1, 3], [6, 5, 1, 0]]
tgt_seq.shape = [2, 4]
'''
# ------------------------------------------------------ #
# (2) word-embedding
# ------------------------------------------------------ #
# Instantiate the embedding layers. There are 8 kinds of words plus the 0 used for padding,
# so the table has 9 entries; each word is represented by a feature vector of length 6
src_embedding_tabel = nn.Embedding(num_embeddings=max_word_idx+1, embedding_dim=model_dim)  # embedding of the feature sequence
tgt_embedding_tabel = nn.Embedding(num_embeddings=max_word_idx+1, embedding_dim=model_dim)  # embedding of the target sequence
print(src_embedding_tabel.weight)  # shape=[9,6]; the first row belongs to padding index 0, the remaining 8 rows to the 8 words
print(tgt_embedding_tabel)

# Look up the feature vector of each word in the embedding table,
# e.g. word 0 is represented by the vector [-1.1004, -1.4062, 1.1152, 0.9054, 1.0759, 1.1679]
src_embedding = src_embedding_tabel(src_seq)  # calling the instance runs its forward pass
tgt_embedding = tgt_embedding_tabel(tgt_seq)

# Print the embedding tensor of each sentence; each row is the embedding of the corresponding word
print(src_embedding)
# shape=[2,4,6]: the feature sequence has 2 sentences, each with 4 words, and each word is a vector of length 6
print(src_embedding.shape)

First of all, the vocabulary consists of the words 1~8, plus the 0 used for padding, so the table now contains 9 entries. nn.Embedding() assigns each of these 9 entries a feature vector of length model_dim=6; see the first matrix below, where word 0 is represented by the vector [-1.1004, -1.4062, 1.1152, 0.9054, 1.0759, 1.1679].
Next, each word in the sequence is encoded by the forward pass; see the second matrix below. For example, in src_seq = [[6, 4, 0, 0], [6, 4, 1, 7]], the first word 6 is represented by the vector [-0.9194, 0.3338, 0.7215, -1.2306, 0.9512, -0.1863].
The shape of the feature sequence therefore changes from [2, 4] to [2, 4, 6]: the feature sequence has 2 sentences, each sentence contains 4 words, and each word is represented by a vector of length 6.
# src_embedding_tabel.weight
Parameter containing:
tensor([[-1.1004, -1.4062, 1.1152, 0.9054, 1.0759, 1.1679],
[-0.0360, -1.6144, 0.9804, 0.4482, 1.8510, 0.3860],
[ 0.2041, 0.1746, 0.4676, -1.3600, 0.3034, 1.7780],
[ 0.5122, -1.3473, -0.2934, -0.7200, 1.9156, -1.5741],
[ 0.7404, -1.1773, 1.3077, -0.7012, 1.9886, -1.3895],
[-1.8221, -0.7920, 0.9091, 0.4478, -0.3373, -1.5661],
[-0.9194, 0.3338, 0.7215, -1.2306, 0.9512, -0.1863],
[-1.3199, -1.4841, 1.0171, 0.8665, 0.3624, 0.4318],
[-1.7603, -0.5641, 0.3106, -2.7896, 1.6406, 1.9038]],
requires_grad=True)
# tgt_embedding_tabel
Embedding(9, 6)
# src_embedding
tensor([[[-0.9194, 0.3338, 0.7215, -1.2306, 0.9512, -0.1863],
[ 0.7404, -1.1773, 1.3077, -0.7012, 1.9886, -1.3895],
[-1.1004, -1.4062, 1.1152, 0.9054, 1.0759, 1.1679],
[-1.1004, -1.4062, 1.1152, 0.9054, 1.0759, 1.1679]],
[[-0.9194, 0.3338, 0.7215, -1.2306, 0.9512, -0.1863],
[ 0.7404, -1.1773, 1.3077, -0.7012, 1.9886, -1.3895],
[-0.0360, -1.6144, 0.9804, 0.4482, 1.8510, 0.3860],
[-1.3199, -1.4841, 1.0171, 0.8665, 0.3624, 0.4318]]],
grad_fn=<EmbeddingBackward>)
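One optional refinement, not used in the code above: nn.Embedding accepts a padding_idx argument, which keeps the embedding of the padding index as a zero vector and excludes it from gradient updates. A minimal sketch with the same vocabulary settings as above:
import torch
from torch import nn
# Reserve index 0 for padding: its row is initialized to zeros and is never updated during training
emb = nn.Embedding(num_embeddings=9, embedding_dim=6, padding_idx=0)
print(emb.weight[0])                     # tensor([0., 0., 0., 0., 0., 0.])
out = emb(torch.tensor([[6, 4, 0, 0]]))  # the two padded positions map to zero vectors
print(out.shape)                         # torch.Size([1, 4, 6])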
2. Position Embedding
The attention mechanism focuses on the importance of words; it pays no attention to the order of the words within a sentence.
For example: "the train from Beijing to Jinan" and "the train from Jinan to Beijing". A word-vector representation alone cannot distinguish the two occurrences of "Beijing"; they are encoded identically. In the actual context, however, they mean different things: in the first sentence Beijing is the departure station, and in the second it is the destination, so the two sentences carry different semantic information.
Therefore, large models built on the attention structure need positional encoding to help them learn order information.
The Transformer solves this problem by adding an extra positional encoding to the input vectors. It uses sinusoidal positional encoding: sine and cosine functions generate the positional information, which is added to the word embedding and fed to the next layer as input (a short sketch of this addition appears at the end of this section).
The formulas are as follows, where pos is the row index (the position of the word in the sentence), i is the column index, and d_model is the length of the vector representing each position:
Even columns: PE(pos, 2i)   = sin( pos / 10000^(2i / d_model) )
Odd columns:  PE(pos, 2i+1) = cos( pos / 10000^(2i / d_model) )
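As a quick sanity check (my own calculation, using the same d_model = 6 as in the code below): for pos = 1 and i = 0, PE(1, 0) = sin(1 / 10000^0) = sin(1) ≈ 0.8415 and PE(1, 1) = cos(1) ≈ 0.5403, which match the second row of the pe_embedding table printed later in this section.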
The code is as follows:
import torch
from torch import nn
from torch.nn import functional as F
import numpy as np

max_word_idx = 8  # the vocabulary of the feature and target sequences consists of 8 words
model_dim = 6     # after word embedding, each word is represented by a vector of length 6

# ------------------------------------------------------ #
# (1) Build the sequences; each word of a sequence is represented by an index
# ------------------------------------------------------ #
# Specify the sentence lengths
src_len = torch.Tensor([2, 4]).to(torch.int32)  # the feature sequence has 2 sentences, of lengths 2 and 4
tgt_len = torch.Tensor([4, 3]).to(torch.int32)  # the target sequence has 2 sentences, of lengths 4 and 3
print(src_len, tgt_len)  # tensor([2, 4]) tensor([4, 3])

# Create the sequences; each word is an index from the vocabulary
# (torch.randint's upper bound is exclusive, so the sampled indices fall in 1~7)
src_seq = [torch.randint(1, max_word_idx, (L,)) for L in src_len]  # feature sequence
tgt_seq = [torch.randint(1, max_word_idx, (L,)) for L in tgt_len]  # target sequence
print(src_seq, tgt_seq)
# [tensor([6, 4]), tensor([6, 4, 1, 7])]    # feature sequence: the first sentence has 2 words, the second has 4
# [tensor([4, 2, 1, 3]), tensor([6, 5, 1])] # target sequence: the first sentence has 4 words, the second has 3

# The sentences have different lengths, so pad them with 0 to a common length
new_seq = []  # stores the padded sentences
for seq in src_seq:  # iterate over the sentences of the feature sequence
    sent = F.pad(seq, pad=(0, max(src_len) - len(seq)))  # pad zeros on the right so all sentences have the same length
    sent = torch.unsqueeze(sent, dim=0)  # make it 2-D: [max_src_len] ==> [1, max_src_len]
    new_seq.append(sent)  # save the padded sentence
for seq in tgt_seq:  # iterate over the sentences of the target sequence
    sent = F.pad(seq, pad=(0, max(tgt_len) - len(seq)))
    sent = torch.unsqueeze(sent, dim=0)  # make it 2-D: [max_tgt_len] ==> [1, max_tgt_len]
    new_seq.append(sent)  # save the padded sentence

# The feature and target sentences are stored in a list; turn them into tensors by concatenating along dim=0
src_seq = torch.cat(new_seq[:2], dim=0)  # feature sequence
tgt_seq = torch.cat(new_seq[2:], dim=0)  # target sequence
print(src_seq, src_seq.shape)  # feature sequence, shape=[2,4]: 2 sentences of 4 words each
print(tgt_seq, tgt_seq.shape)  # same for the target sequence
'''
src_seq = [[6, 4, 0, 0], [6, 4, 1, 7]]
src_seq.shape = [2, 4]
tgt_seq = [[4, 2, 1, 3], [6, 5, 1, 0]]
tgt_seq.shape = [2, 4]
'''
# ------------------------------------------------------ #
# (2) position-embedding: odd columns use cos, even columns use sin
# Sinusoidal positional encoding generalizes well, is symmetric,
# and the embedding of each position is deterministic
# ------------------------------------------------------ #
# ==1== positional encoding table
# Construct the row vector of positions; each sentence in the feature sequence contains 4 words
pos_mat = torch.arange(max(src_len))  # the position of each word within its sentence
# Turn it into a 2-D column vector
pos_mat = torch.reshape(pos_mat, shape=[-1, 1])  # shape=[4,1]
print(pos_mat)

# Construct the column matrix, corresponding to 2i/d_model.
# Each word is a vector of length 6 (d_model=6); i indexes the columns of that vector, so 2i are the even columns
i_mat = torch.arange(0, model_dim, 2).reshape(shape=(1, -1)) / model_dim
print(i_mat)  # tensor([[0.0000, 0.3333, 0.6667]])
# Raise 10000 to the power 2i/d_model
i_mat = torch.pow(10000, i_mat)
print(i_mat)  # tensor([[  1.0000,  21.5443, 464.1590]])

# Initialize the positional encoding table: a tensor with 4 rows and 6 columns.
# 4 is the sequence length (number of words per sentence), 6 is the number of feature columns (each position is a vector of length 6)
pe_embedding_tabel = torch.zeros(size=(max(src_len), model_dim))
print(pe_embedding_tabel)
# Even columns
pe_embedding_tabel[:, 0::2] = torch.sin(pos_mat / i_mat)
print(pe_embedding_tabel)
# Odd columns
pe_embedding_tabel[:, 1::2] = torch.cos(pos_mat / i_mat)
print(pe_embedding_tabel)  # the completed sinusoidal positional encoding table

# Instantiate an embedding layer that encodes the 4 positions of each sentence as vectors of length 6
pe_embedding = nn.Embedding(num_embeddings=max(src_len), embedding_dim=model_dim)
print(pe_embedding.weight)
# Overwrite the layer's weights with the sinusoidal table, and freeze them so they are not updated during training
pe_embedding.weight = nn.Parameter(pe_embedding_tabel, requires_grad=False)
print(pe_embedding.weight)  # shape=[4,6]

# ==2== position indices
# Build the position index of each word within its sentence
src_pos = [torch.unsqueeze(torch.arange(max(src_len)), dim=0) for _ in src_len]
tgt_pos = [torch.unsqueeze(torch.arange(max(tgt_len)), dim=0) for _ in tgt_len]
print(src_pos,  # [tensor([[0, 1, 2, 3]]), tensor([[0, 1, 2, 3]])]
      tgt_pos)  # [tensor([[0, 1, 2, 3]]), tensor([[0, 1, 2, 3]])]
# Turn the lists into tensors by concatenating along dim=0
src_pos = torch.cat(src_pos, dim=0)
tgt_pos = torch.cat(tgt_pos, dim=0)
print(src_pos,  # tensor([[0, 1, 2, 3], [0, 1, 2, 3]])
      tgt_pos)  # tensor([[0, 1, 2, 3], [0, 1, 2, 3]])

# Positional encoding lookup: the longest sentence has 4 words, and each position is represented by a vector of length 6
src_pe_embedding = pe_embedding(src_pos)
tgt_pe_embedding = pe_embedding(tgt_pos)
print(src_pe_embedding.shape)  # torch.Size([2, 4, 6])
print(src_pe_embedding)

The feature and target sequences are constructed in exactly the same way as in the first section, so I will not repeat that explanation here.
Position Embedding encodes the position index of each word in the sentence, whereas Word Embedding encodes the words themselves.
First, initialize a matrix with 4 rows and 6 columns, where the rows are the position indices and the columns are the dimensions of the vector representing each position. Following the formula, the even columns are filled with the sin function and the odd columns with the cos function, which gives the sinusoidal encoding table. Next, instantiate nn.Embedding() and replace its randomly initialized weight matrix with the sinusoidal table, freezing the weights so they are not updated during training, as shown in the first matrix below.
Then build src_pos, the position indices of the words of each sentence in the feature sequence. Each sentence contains 4 words, so the position indices are [0, 1, 2, 3], and src_pos.shape = [2, 4] means the feature sequence has 2 sentences with 4 word positions each. After the Position Embedding layer, the shape becomes [2, 4, 6]: the feature sequence has 2 sentences, each sentence contains 4 word positions, and each position is represented by a feature vector of length 6, as shown in the second matrix below.
# pe_embedding.weight (sinusoidal positional encoding table)
tensor([[ 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 1.0000],
[ 0.8415, 0.5403, 0.0464, 0.9989, 0.0022, 1.0000],
[ 0.9093, -0.4161, 0.0927, 0.9957, 0.0043, 1.0000],
[ 0.1411, -0.9900, 0.1388, 0.9903, 0.0065, 1.0000]])
# src_pe_embedding
tensor([[[ 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 1.0000],
[ 0.8415, 0.5403, 0.0464, 0.9989, 0.0022, 1.0000],
[ 0.9093, -0.4161, 0.0927, 0.9957, 0.0043, 1.0000],
[ 0.1411, -0.9900, 0.1388, 0.9903, 0.0065, 1.0000]],
[[ 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 1.0000],
[ 0.8415, 0.5403, 0.0464, 0.9989, 0.0022, 1.0000],
[ 0.9093, -0.4161, 0.0927, 0.9957, 0.0043, 1.0000],
[ 0.1411, -0.9900, 0.1388, 0.9903, 0.0065, 1.0000]]])
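The code above computes the word embedding and the positional encoding separately. As mentioned at the beginning of this section, the Transformer adds the two together to form the encoder input (the original paper additionally scales the word embedding by sqrt(d_model) before the addition). A minimal sketch of that step, assuming src_embedding from section 1 and src_pe_embedding from this section are available in the same script:
# Hedged sketch: the encoder input is the element-wise sum of word embedding and positional encoding
encoder_input = src_embedding + src_pe_embedding  # both have shape [2, 4, 6]
print(encoder_input.shape)  # torch.Size([2, 4, 6])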
3. self_attention_Mask
This section covers the masking method used in multi-head attention inside the Encoder.
Because the sentences of the feature sequence have different lengths, they are padded to the same length. In the feature sequence the first sentence contains only 2 valid words, marked with 1, while its two padded positions are marked with 0. The valid-position matrix of the feature sequence is therefore [[1, 1, 0, 0], [1, 1, 1, 1]], with shape=[2, 4].
Next, build an adjacency matrix of shape=[2, 4, 4], whose 4 rows and 4 columns correspond to word positions. Each element of the adjacency matrix describes the relationship between a pair of words: 1 means both words are valid, 0 means at least one of them is an invalid position produced by padding.
Finally, mask every region where the adjacency matrix is 0 by setting the score at that position to a very large negative value.
The code is as follows:
import torch
from torch.nn import functional as F

# ------------------------------------------------------ #
# Construct a mask of shape=[batch, max_src_len, max_src_len];
# masked positions will be filled with a very large negative number
# ------------------------------------------------------ #
# Specify the sentence lengths
src_len = torch.Tensor([2, 4]).to(torch.int32)
# The feature sequence has 2 sentences: the first has length 2, the second has length 4
print(src_len)  # tensor([2, 4])

# Mark the valid encoder positions, e.g. the first sentence contains only 2 words, so only its first 2 elements are 1
valid_encoder_pos = [torch.ones(L) for L in src_len]
print(valid_encoder_pos)  # [tensor([1., 1.]), tensor([1., 1., 1., 1.])]

# During training every sentence must contain the same number of words, so pad all sentences to the maximum valid length
new_encoder_pos = []  # stores the padded sentences
for sent in valid_encoder_pos:  # iterate over the sentences
    sent = F.pad(sent, pad=(0, max(src_len) - len(sent)))  # pad zeros on the right so the length is 4
    sent = torch.unsqueeze(sent, dim=0)  # make it 2-D: [max_src_len] ==> [1, max_src_len]
    new_encoder_pos.append(sent)  # save the padded sentence
valid_encoder_pos = torch.cat(new_encoder_pos, dim=0)
print(valid_encoder_pos)  # tensor([[1., 1., 0., 0.], [1., 1., 1., 1.]])

# [2,4] ==> [2,4,1]
valid_encoder_pos = torch.unsqueeze(valid_encoder_pos, dim=-1)
# The adjacency matrix gives the pairwise validity of positions: [2,4,1] @ [2,1,4] ==> [2,4,4]
valid_encoder_pos_matrix = torch.bmm(valid_encoder_pos, valid_encoder_pos.transpose(1, 2))
print(valid_encoder_pos_matrix)  # the first sentence has only 2 valid words; the last two are padding

# Invert it to get the invalid matrix: positions equal to 1 come from padding and are invalid
invalid_encoder_pos_matrix = 1 - valid_encoder_pos_matrix
# Convert to boolean: True marks an invalid region that needs to be masked
mask_encoder_self_attention = invalid_encoder_pos_matrix.to(torch.bool)
print(mask_encoder_self_attention)

# Randomly generate a stand-in for the attention scores: 2 sentences, each with a 4x4 score matrix
score = torch.randn(2, 4, 4)
# Wherever the mask is True, the corresponding element of score is replaced with a very large negative number
masked_score = score.masked_fill(mask_encoder_self_attention, -1e10)
print(score)
print(masked_score)

The first matrix below is the adjacency matrix of the padded feature sequence; the second is the randomly generated score matrix; the third is the masked score matrix, in which the masked elements have been set to very large negative values, so that after the softmax function the padded positions receive attention weights close to zero and contribute almost nothing during back-propagation (a short check of this appears after the matrices below).
# Adjacency matrix; 0 marks the regions produced by padding
tensor([[[1., 1., 0., 0.],
[1., 1., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]],
[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]])
# Randomly constructed score, shape=[2,4,4]
tensor([[[-0.1509, -0.2514, -0.5393, 2.0241],
[-0.1525, -1.9199, 0.6847, -1.8795],
[ 1.0322, 0.0772, 0.9992, -0.1082],
[ 1.4347, 1.4084, -0.6897, -0.2518]],
[[-0.0109, 0.0328, 1.5458, 0.9872],
[ 0.0314, -1.3659, -0.6441, -1.6444],
[-0.0487, 0.0438, 0.0576, -1.1691],
[ 0.3475, -0.1329, -1.0455, -0.9671]]])
# score after applying the mask
tensor([[[-1.5094e-01, -2.5137e-01, -1.0000e+10, -1.0000e+10],
[-1.5255e-01, -1.9199e+00, -1.0000e+10, -1.0000e+10],
[-1.0000e+10, -1.0000e+10, -1.0000e+10, -1.0000e+10],
[-1.0000e+10, -1.0000e+10, -1.0000e+10, -1.0000e+10]],
[[-1.0883e-02, 3.2843e-02, 1.5458e+00, 9.8725e-01],
[ 3.1395e-02, -1.3659e+00, -6.4410e-01, -1.6444e+00],
[-4.8689e-02, 4.3825e-02, 5.7644e-02, -1.1691e+00],
[ 3.4751e-01, -1.3290e-01, -1.0455e+00, -9.6713e-01]]])
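To see the effect described above, apply a softmax over the last dimension of masked_score. This short check is my own addition, reusing the masked_score tensor and the F import from the code above:
prob = F.softmax(masked_score, dim=-1)  # attention weights, one row per query position
print(prob[0])
# In the first sentence, the last two columns (padding) get a weight of 0 and each row still sums to 1;
# rows that consist entirely of padding degenerate into a uniform distribution (0.25 everywhere).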