Writing a KV Database in C# on .NET 6.0, Based on the LSM-Tree Idea (Case-Study Edition)
2022-07-27 10:14:00 【Dotnet cross platform】
It's a bit long, but if you read it through patiently you should come away understanding how the thing actually works.
This is a C# implementation of a KV database, currently built on .NET 6.0. It is still an early prototype, but the skeleton is complete — after all, it was finished less than a week ago.
It is, in effect, the prototype of a NoSQL store. Building one helps you deeply understand the internals and concepts of the related databases, and it makes getting started with them easier.
Suitable for anyone interested in how databases work and how to implement one.
The whole thing is roughly 1,500 lines of code, of which about 500 lines are the core.
Why implement a database myself
Around 2018 I had the idea of writing a database myself. A wheel I reinvent may never be as good as existing products, but few people actually build one, and there are very few books about building databases — in fact there are few books about building any kind of advanced product from scratch.
There is plenty of open source today, but for someone with a thin foundation like me, the architecture and concepts of those frameworks are not easy to absorb. Just reading the code, you can stare at it for a long time without understanding why things exist the way they do.
Then, at just the right time, I came across the article 《【万字长文】使用LSM Tree思想实现一个KV数据库》 by the blogger 【The fool works well】. It moved me a lot and got my stalled heart beating again. The author implemented it in Go, but language was never the obstacle for me — as long as the technical ideas hold, I can realize them in C#. So, with a salute to the original author, I followed close behind.
My own study of data storage also took a long time. To study anything you have to start from the principles, so I began with Google's three papers (GFS, MapReduce, BigTable). But papers are papers: I couldn't fully digest them, and the various blog posts I read online often left me just as confused. With nobody to discuss them with, I aborted the attempt several times.
Sometimes, when implementing a product or framework yourself, you keep wondering why every single bug lands on you. Then you realize: if you build a new product yourself without borrowing established techniques, you really are starting from scratch, and naturally you meet every bug one by one.
The pictures below are from when I first wanted to build a database — notes I wrote and drew by hand. When it came to actually doing it, the logic never worked out quite that neatly. That design was for a relational database, which is harder; here are the old drafts, scribbled down whenever an idea came.



That proved a bit too difficult to implement, so what I've built for now is a KV database. Well-known stores such as HBase, a column-oriented database, use the LSM-Tree idea as the technical foundation of their underlying storage engine.
What is LSM-Tree
LSM-Tree stands for Log Structured Merge Tree (Chinese: 日志结构合并树). It is a layered, ordered, disk-oriented data structure. Its core idea is to fully exploit the fact that batched sequential writes to disk are far faster than random writes, and to build a high-write-throughput storage system on top of that.
Concretely, the principle is: on the disk, append data whenever possible instead of writing it in place, because appending is much faster than random writes. The structure suits write-heavy, read-light workloads, which is why LSM-Tree is designed to deliver better write throughput than a traditional B+ tree or ISAM — it achieves that goal by eliminating random in-place update operations.
Related products include HBase, Cassandra, LevelDB, RocksDB, MongoDB, TiDB, DynamoDB, BookKeeper, SQLite, and others.
So the core of LSM-Tree is appending data, not modifying it in place.
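To make the append-only idea concrete, here is a minimal sketch of my own (not code from this project): updates and deletes are both appended as new records, a delete is just a "tombstone" record, and replaying the records from oldest to newest yields the latest state.

using System;
using System.Collections.Generic;

record LogRecord(string Key, string? Value, bool Deleted);

class AppendOnlyLog
{
    private readonly List<LogRecord> _records = new(); // stands in for a disk file

    public void Set(string key, string value) =>
        _records.Add(new LogRecord(key, value, false)); // update = append a new version

    public void Delete(string key) =>
        _records.Add(new LogRecord(key, null, true));   // delete = append a tombstone

    // Replaying from oldest to newest leaves only the latest state per key.
    public Dictionary<string, string> Replay()
    {
        var state = new Dictionary<string, string>();
        foreach (var r in _records)
        {
            if (r.Deleted) state.Remove(r.Key);
            else state[r.Key] = r.Value!;
        }
        return state;
    }
}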
LSM-Tree Architecture analysis

This drawing expresses the overall design. The body is outlined by red and black lines, in two parts: red is the write path, black is the read path. Arrows show the direction the data flows, and the numbers give the logical order.
The whole system has roughly three parts: the database operation layer (mainly reads and writes), the memory part (the active memory table and the immutable memory tables), and the disk part (the WAL log and the SSTables).
First, the key terms
MemoryTable
The memory table: a temporary in-memory cache of the data. It can be implemented with a binary sort tree or with a dictionary; I use a dictionary.
WAL Log
WAL stands for Write Ahead Log, a pre-write log used to preserve data durability across system failures. When a write request arrives, the data is first appended to the WAL file (sometimes just called the log) and flushed to disk before the in-memory data structure is updated.
If you've used MySQL you'll know its binlog — same idea. Write to the WAL log first to record the operation, then write to the memory table. If the machine suddenly crashes, whatever was only in memory is lost, but on the next restart the records in the WAL log can be replayed to restore the data to its last state.
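A minimal sketch of that write order, of my own (the project's IWalLog interface appears later; the record format here is a toy). The record is appended and flushed to the log file before the in-memory dictionary is touched:

using System.Collections.Generic;
using System.IO;
using System.Text;

class WalBackedStore
{
    private readonly FileStream _wal; // kept open for appending
    private readonly Dictionary<string, string> _memTable = new();

    public WalBackedStore(string walPath) =>
        _wal = new FileStream(walPath, FileMode.Append, FileAccess.Write);

    public void Set(string key, string value)
    {
        // 1) Append the record to the WAL and flush it to disk first...
        var line = Encoding.UTF8.GetBytes($"{key}\t{value}\n"); // toy record format
        _wal.Write(line, 0, line.Length);
        _wal.Flush(true); // true = flush OS buffers through to the physical disk
        // 2) ...and only then update the in-memory table.
        _memTable[key] = value;
    }
}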
Immutable MemoryTable
Immutable means unchangeable: compared with the active memory table, no new data can be written to it — it is read-only.
SSTable
SSTable stands for Sorted Strings Table: an ordered table of strings. Because the keys are sorted, you can build a sparse index over them, and merging files becomes simple and convenient. If I want to look up some key and it falls between two indexed keys, I can go straight to that region of the file instead of keeping everything in memory.
Here I use a hash table instead. I think it's worth spending a little memory to be fast — trading space for time — so at present the full index is loaded into memory while the data stays in the SSTable file. For a better design, you could implement a proper ordered table and use binary search.
Implemented this way, memory holds a large number of index entries. Lookups are relatively fast, but memory usage is higher: space traded for time.
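For reference, this is roughly what the binary-search alternative mentioned above looks like — a sketch of my own, assuming the SSTable's index holds keys sorted in ordinal order together with their file offsets (IndexEntry is a hypothetical type, not the project's):

using System;
using System.Collections.Generic;

// Hypothetical index entry: where a key's value lives inside the SSTable file.
record IndexEntry(string Key, long Offset, long Length);

static class SparseIndex
{
    // Binary search over index entries sorted by key; returns the entry for
    // 'key', or null if the key is not present in this SSTable.
    public static IndexEntry? Find(IReadOnlyList<IndexEntry> sorted, string key)
    {
        int lo = 0, hi = sorted.Count - 1;
        while (lo <= hi)
        {
            int mid = lo + (hi - lo) / 2;
            int cmp = string.CompareOrdinal(sorted[mid].Key, key);
            if (cmp == 0) return sorted[mid];
            if (cmp < 0) lo = mid + 1; else hi = mid - 1;
        }
        return null;
    }
}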
Now let's walk through the concrete flows.
LSM-Tree write path analysis
Look at the picture below for the data write analysis.

Follow the red lines and you won't get lost.
LSM-Tree write path, step 1

In the first step there are only two parts to note: the memory table and WAL.log.
Data is written into the memory table so that it can be stored in — and served by — the database quickly.
It is written into WAL.log to prevent data loss in abnormal situations.
Under normal circumstances, each write goes to WAL.log first, then into memory.
If the program crashes or the machine loses power, then when the service restarts it first loads WAL.log and replays it in order from beginning to end, restoring the data into the memory table. When it reaches the end, the memory table is back at the final state recorded in WAL.log.
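A replay sketch of my own, using the same toy "key\tvalue" record format as the write sketch above (the project's real format stores full KeyValue records, including tombstones):

using System.Collections.Generic;
using System.IO;

static class WalRecovery
{
    // Replays a WAL file written as "key\tvalue" lines (toy format), oldest
    // first, so the dictionary ends up at the last recorded state.
    public static Dictionary<string, string> Replay(string walPath)
    {
        var memTable = new Dictionary<string, string>();
        foreach (var line in File.ReadLines(walPath))
        {
            var parts = line.Split('\t', 2);
            if (parts.Length == 2)
                memTable[parts[0]] = parts[1]; // later records overwrite earlier ones
        }
        return memTable;
    }
}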
Note
One thing to be careful about: when the immutable tables (Immutable MemoryTable) described below are written out to an SSTable, WAL.log is cleared and the data of the current active memory table is written back into WAL.log.
LSM-Tree write path, step 2

Step two is relatively simple. When the memory table's count exceeds a certain threshold, a new memory table is created to take over writes, and the old one becomes an Immutable MemoryTable, waiting to be flushed to an SSTable. At that moment there may be several Immutable MemoryTables at once.
LSM-Tree write path, step 3

In step three, the database periodically checks whether any Immutable MemoryTables exist. If they do, each one is flushed directly to an SSTable — however many Immutable MemoryTables are currently in memory.
An SSTable flushed from memory always lands at Level 0, and its file name carries the creation time, so the files sort on two keys: first the level, then the time.
LSM-Tree write path, step 4

Step four is segment merging — compaction by level. It looks at all the SSTable files of level 0 (SSTable0, SSTable1, SSTable2), and judges by their total size or total count whether they need to be merged.
If Level 0's size budget is 10 MB, then Level 1's is 100 MB, and so on.
When the SSTable files of Level 0 together exceed 10 MB (or whatever limit is configured), they are merged into one large file following the same ordering idea as WAL.log: traverse old data first, then new data, so newer records overwrite older ones. Deleted keys are simply dropped, keeping only the latest state.
If Level 1's total SSTable size exceeds 100 MB, that triggers a compaction of Level 1, executed exactly like Level 0's — just one level down.
The benefit of this compaction is keeping the volume of data and the number of files as small as possible; after all, a pile of files is not very convenient to manage.
With that, the write path has been fully analyzed.
Note
When querying, new data comes first and old data later; when compacting by segments, old data is applied first so that newer data overwrites its state. That's a detail to watch when implementing — see the merge sketch below.
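A compaction sketch of my own (the names are hypothetical, not the project's): tables are replayed oldest to newest into a dictionary so new records overwrite old ones, tombstones are dropped, and the per-level size budget grows by a fixed multiple per level.

using System;
using System.Collections.Generic;
using System.Linq;

record Entry(string Key, byte[]? Value, bool Deleted);

static class Compaction
{
    // Size budget for a level: Level 0 = level0SizeMb, each level N times the previous.
    public static long MaxBytesForLevel(int level, int level0SizeMb, int multiple) =>
        (long)level0SizeMb * 1024 * 1024 * (long)Math.Pow(multiple, level);

    // 'tablesOldestFirst' must be ordered oldest first; each inner list is one SSTable.
    public static List<Entry> Merge(IEnumerable<IReadOnlyList<Entry>> tablesOldestFirst)
    {
        var latest = new Dictionary<string, Entry>();
        foreach (var table in tablesOldestFirst)
            foreach (var e in table)
                latest[e.Key] = e;               // newer tables overwrite older state

        return latest.Values
            .Where(e => !e.Deleted)              // drop tombstones entirely
            .OrderBy(e => e.Key, StringComparer.Ordinal) // keep the output sorted
            .ToList();
    }
}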
LSM-Tree read path analysis

This is the data lookup flow. Follow the black lines and the number marks, and the access order is easy to see:
1. MemoryTable (the active memory table)
2. Immutable MemoryTable (the immutable tables)
3. Level 0-N (SSTableN … SSTable1, SSTable0) (the sorted string tables)
Basically those three parts, in that order. The level tables are searched from level 0 downward, and within each level the SSTables are searched from newest to oldest. The first match found is returned — whether that key is a live record or a deleted one. See the sketch below.
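A read-path sketch of my own, reusing the hypothetical Entry record from the compaction sketch above:

using System.Collections.Generic;

static class ReadPath
{
    // 'immutablesNewestFirst' and each level's tables must be ordered newest
    // first, so the first hit is the most recent state (live or tombstone).
    public static Entry? Search(
        string key,
        IReadOnlyDictionary<string, Entry> memTable,
        IEnumerable<IReadOnlyDictionary<string, Entry>> immutablesNewestFirst,
        IEnumerable<IEnumerable<IReadOnlyDictionary<string, Entry>>> levels)
    {
        if (memTable.TryGetValue(key, out var hit)) return hit;

        foreach (var t in immutablesNewestFirst)
            if (t.TryGetValue(key, out hit)) return hit;

        foreach (var level in levels)            // level 0, then 1, 2, ...
            foreach (var table in level)         // newest table first within a level
                if (table.TryGetValue(key, out hit)) return hit;

        return null; // key was never written
    }
}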
LSM-Tree architecture analysis and implementation

The core idea:
It is really a time-ordered record table. Every operation is recorded — like a message queue: record a series of actions, then replay them to obtain the latest data state. It's also similar to the Event Store in CQRS; the concept is the same. Once you understand what it is in essence, the implementation becomes straightforward.
Wal.log and the SSTables exist to make sure the data lands on disk and persists without loss; MemoryTable is a temporary cache, which also serves to speed up access.
From these points of view, the system divides into the following major objects:
1. Database (plays the coordinating role, managing Wal.log, the SSTables, and the MemoryTable)
2. Wal.log (records the in-flight data log)
3. MemoryTable (holds data in memory and provides the lookup interface for the database)
4. SSTable (manages the SSTable files and provides the SSTable query functions)
The class and interface design below is organized around these objects.
KeyValue (the concrete data structure)
When designing, the first thing to lay down is the structure of the actual data. I designed it like this:
There are three main fields: Key, DataValue, and Deleted. DataValue is of type byte[], so when writing to a file it can be written out directly.
/// <summary>
/// A key-value data record
/// </summary>
public class KeyValue
{
public string Key { get; set; }
public byte[] DataValue { get; set; }
public bool Deleted { get; set; }
private object Value;
public KeyValue() { }
public KeyValue(string key, object value, bool Deleted = false)
{
Key = key;
Value = value;
DataValue = value.AsBytes();
this.Deleted = Deleted;
}
public KeyValue(string key, byte[] dataValue, bool deleted)
{
Key = key;
DataValue = dataValue;
Deleted = deleted;
}
/// <summary>
/// Whether there is valid data (i.e. not in the deleted state)
/// </summary>
/// <returns></returns>
public bool IsSuccess()
{
return !Deleted || DataValue != null;
}
/// <summary>
/// Whether the record is in a consistent state (value present and live, or value absent and deleted)
/// </summary>
/// <returns></returns>
public bool IsExist()
{
if (DataValue != null && !Deleted || DataValue == null && Deleted)
{
return true;
}
return false;
}
public T Get<T>() where T : class
{
if (Value == null)
{
Value = DataValue.AsObject<T>();
}
return (T)Value;
}
public static KeyValue Null = new KeyValue() { DataValue = null };
}

IDataBase (the database interface)
The main class for external interaction — the database itself. It exposes the add/delete/query surface through Get, Set, and Delete; a quick usage sketch and then the interface definition follow.
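A hypothetical usage sketch — the concrete class implementing IDataBase isn't shown in this excerpt, so how you obtain the 'db' instance is assumed:

// Hypothetical usage; only the IDataBase interface is shown in this article,
// so constructing the database instance is left out.
using System;

class UsageDemo
{
    static void Run(IDataBase db)
    {
        db.Set("user:1", new { Name = "Tom", Age = 20 }); // insert or update
        KeyValue kv = db.Get("user:1");                   // read it back
        Console.WriteLine(kv.Deleted);                    // false: a live record

        db.Delete("user:1");                              // logically appends a tombstone
        Console.WriteLine(db.GetKeys().Count);            // enumerate the remaining keys
    }
}

And the interface definition itself: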
/// <summary>
/// Database interface
/// </summary>
public interface IDataBase : IDisposable
{
/// <summary>
/// Database configuration
/// </summary>
IDataBaseConfig DataBaseConfig { get; }
/// <summary>
/// get data
/// </summary>
KeyValue Get(string key);
/// <summary>
/// Save the data ( Or update the data )
/// </summary>
bool Set(KeyValue keyValue);
/// <summary>
/// Save the data ( Or update the data )
/// </summary>
bool Set(string key, object value);
/// <summary>
/// Get all keys
/// </summary>
List<string> GetKeys();
/// <summary>
/// Delete the specified data and return what was there
/// </summary>
KeyValue DeleteAndGet(string key);
/// <summary>
/// Delete data
/// </summary>
void Delete(string key);
/// <summary>
/// Periodic check
/// </summary>
void Check(object state);
/// <summary>
/// Clear all data in the database
/// </summary>
void Clear();
}

IDataBase.Check (the periodic check)
This is the periodic task that checks the Immutable MemoryTables; how often it fires is configured by the IDataBaseConfig.CheckInterval parameter.
Its responsibility is to check the memory tables, and to check whether the SSTables should trigger a segment-merge compaction.
public void Check(object state)
{
//Log.Info("Periodic heartbeat check!");
if (IsProcess)
{
return;
}
if (ClearState)
{
return;
}
try
{
Stopwatch stopwatch = Stopwatch.StartNew();
IsProcess = true;
checkMemory();
TableManage.Check();
stopwatch.Stop();
GC.Collect();
Log.Info($" Timed heartbeat processing takes time :{stopwatch.ElapsedMilliseconds} millisecond ");
}
finally
{
IsProcess = false;
}
}

IDataBaseConfig (the database configuration)
The database's configuration: where the database is stored on disk, the thresholds for generating SSTables, and the interval of the periodic check.
/// <summary>
/// Database related configuration
/// </summary>
public interface IDataBaseConfig
{
/// <summary>
/// Database data directory
/// </summary>
public string DataDir { get; set; }
/// <summary>
/// Maximum total file size of all SsTables at level 0, in MB. Beyond this value, this level's SsTables are compacted into the next level.
/// Each level's data size budget is N times that of the level above.
/// </summary>
public int Level0Size { get; set; }
/// <summary>
/// Size multiple between levels
/// </summary>
public int LevelMultiple { get; set; }
/// <summary>
/// Threshold count of tables per level
/// </summary>
public int LevelCount { get; set; }
/// <summary>
/// Maximum number of kv entries in the memory table; beyond this threshold the memory table is saved to an SsTable
/// </summary>
public int MemoryTableCount { get; set; }
/// <summary>
/// Interval of the memory/file compaction check, i.e. how often the periodic check runs
/// </summary>
public int CheckInterval { get; set; }
}

IMemoryTable (the memory table)

This is really a management table for the in-memory data — it mainly manages MemoryTableValue objects. The object is implemented with a hash dictionary, though you could choose another structure, such as an ordered binary tree.
/// <summary>
/// The memory table (could be a sort tree / binary tree; here, a dictionary)
/// </summary>
public interface IMemoryTable : IDisposable
{
IDataBaseConfig DataBaseConfig { get; }
/// <summary>
/// Get total
/// </summary>
int GetCount();
/// <summary>
/// Search (from newest to oldest)
/// </summary>
KeyValue Search(string key);
/// <summary>
/// Set new value
/// </summary>
void Set(KeyValue keyValue);
/// <summary>
/// Delete key
/// </summary>
void Delete(KeyValue keyValue);
/// <summary>
/// Get all key Data list
/// </summary>
/// <returns></returns>
IList<string> GetKeys();
/// <summary>
/// Get all the data
/// </summary>
/// <returns></returns>
(List<KeyValue> keyValues, List<long> times) GetKeyValues(bool Immutable);
/// <summary>
/// Get the number of invariant tables
/// </summary>
/// <returns></returns>
int GetImmutableTableCount();
/// <summary>
/// Start swapping
/// </summary>
void Swap(List<long> times);
/// <summary>
/// Clear all the data
/// </summary>
void Clear();
}

MemoryTableValue (the concrete implementation)
The Immutable property marks a memory table as immutable; concretely, the mark is applied by checking the table against IDataBaseConfig.MemoryTableCount (the maximum number of kv entries in a memory table).
public class MemoryTableValue : IDisposable
{
public long Time { get; set; } = IDHelper.MarkID();
/// <summary>
/// Is it immutable
/// </summary>
public bool Immutable { get; set; } = false;
/// <summary>
/// data
/// </summary>
public Dictionary<string, KeyValue> Dic { get; set; } = new();
public void Dispose()
{
Dic.Clear();
}
public override string ToString()
{
return $"Time {Time} Immutable:{Immutable}";
}
}

When does a table's state change to Immutable MemoryTable?
Here I do it at the Set entry point: if the count exceeds IDataBaseConfig.MemoryTableCount (the maximum number of kv entries in a memory table), the table's state is changed:
public void Check()
{
if (CurrentMemoryTable.Dic.Count() >= DataBaseConfig.MemoryTableCount)
{
var value = new MemoryTableValue();
dics.Add(value.Time, value);
CurrentMemoryTable.Immutable = true;
}
}

IWalLog (the write-ahead log)
The WAL log is much simpler: it just writes KeyValues to a file. To keep appending to the WAL efficiently, the file handle is held open inside the object. The SSTables don't need that — they are read on demand.
/// <summary>
/// The write-ahead log
/// </summary>
public interface IWalLog : IDisposable
{
/// <summary>
/// Database configuration
/// </summary>
IDataBaseConfig DataBaseConfig { get; }
/// <summary>
/// Load the WAL log into a memory table
/// </summary>
/// <returns></returns>
IMemoryTable LoadToMemory();
/// <summary>
/// Write the log
/// </summary>
void Write(KeyValue data);
/// <summary>
/// Write the log
/// </summary>
void Write(List<KeyValue> data);
/// <summary>
/// Reset log file
/// </summary>
void Reset();
}

ITableManage (SSTable management)
To manage the SSTables properly there needs to be a manager; this interface is it. The SSTables span multiple levels, and each file is named Level + timestamp + .db so it can be identified externally.
/// <summary>
/// Table management items
/// </summary>
public interface ITableManage : IDisposable
{
IDataBaseConfig DataBaseConfig { get; }
/// <summary>
/// Search (from newest to oldest)
/// </summary>
KeyValue Search(string key);
/// <summary>
/// Get all keys
/// </summary>
List<string> GetKeys();
/// <summary>
/// Check the database files; if a file is invalid or there is too much data, a file merge is triggered
/// </summary>
void Check();
/// <summary>
/// Create a new Table
/// </summary>
void CreateNewTable(List<KeyValue> values, int Level = 0);
/// <summary>
/// Clean up a certain level of data
/// </summary>
/// <param name="Level"></param>
public void Remove(int Level);
/// <summary>
/// Clear data
/// </summary>
public void Clear();
}

ISSTable (the SSTable file)
The content management of the SSTable is really the core of the LSM-Tree: merging data, querying data, writing, loading — all low-level operations that call for a fair amount of database knowledge.
/// <summary>
/// The file information table (persisted on disk)
/// Metadata | index list | data area (data modifications only append, then the index list is updated)
/// </summary>
public interface ISSTable : IDisposable
{
/// <summary>
/// Data address
/// </summary>
public string TableFilePath();
/// <summary>
/// Rewrite file
/// </summary>
public void Write(List<KeyValue> values, int Level = 0);
/// <summary>
/// Data location
/// </summary>
public Dictionary<string, DataPosition> DataPositions { get; }
/// <summary>
/// Get total
/// </summary>
/// <returns></returns>
public int Count { get; }
/// <summary>
/// Metadata
/// </summary>
public ITableMetaInfo FileTableMetaInfo { get; }
/// <summary>
/// Query data
/// </summary>
/// <param name="key"></param>
/// <returns></returns>
public KeyValue Search(string key);
/// <summary>
/// Orderly key list
/// </summary>
/// <returns></returns>
public List<string> SortIndexs();
/// <summary>
/// To obtain position
/// </summary>
DataPosition GetDataPosition(string key);
/// <summary>
/// Read the value of a position
/// </summary>
public object ReadValue(DataPosition position);
/// <summary>
/// Load all the data
/// </summary>
/// <returns></returns>
public List<KeyValue> ReadAll(bool incloudDeleted = true);
/// <summary>
/// Get all keys
/// </summary>
/// <returns></returns>
public List<string> GetKeys();
/// <summary>
/// Get table name
/// </summary>
/// <returns></returns>
public long FileTableName();
/// <summary>
/// File size
/// </summary>
/// <returns></returns>
public long FileBytes { get; }
/// <summary>
/// Get levels
/// </summary>
public int GetLevel();
}

IDataPosition (the sparse index entry)
This makes it convenient to locate and read the actual data content from an SSTable.
/// <summary>
/// The location of the data
/// </summary>
public interface IDataPosition
{
/// <summary>
/// Index start position
/// </summary>
public long IndexStart { get; set; }
/// <summary>
/// Start address
/// </summary>
public long Start { get; set; }
/// <summary>
/// Data length
/// </summary>
public long Length { get; set; }
/// <summary>
/// key The length of
/// </summary>
public long KeyLength { get; set; }
/// <summary>
/// Whether it has been deleted
/// </summary>
public bool Deleted { get; set; }
public byte[] GetBytes();
}

Data structure analysis
The structure of the in-memory table needs no explanation — it's just a hash dictionary. Two structures do deserve specific analysis: the WALLog file and the SSTable file.
WALLog structural analysis

The picture was hard to draw horizontally, so I drew it vertically. WalLog stores KeyValue data in chronological order; when it is loaded into the Memory Table, the records are overlaid in the numbered order until the final state is reached.
By the same token, when SSTables are segment-merged and compacted, exactly the same principle applies.
SSTable structural analysis

An SSTable is itself a file, named roughly like this:
0_16586442986880000.db
The format is level_timestamp.db. To support this naming rule I also wrote a simple algorithm for generating non-repeating, time-ordered IDs.
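The project's IDHelper.MarkID (used as the table timestamp earlier) isn't shown in this excerpt; here is a plausible sketch of my own for such a generator — monotonically increasing and unique within a process, even when two calls land on the same tick:

using System;
using System.Threading;

static class IdHelper
{
    private static long _last;

    // Returns a time-ordered id; if called twice within the same tick,
    // the second call bumps the value by one so ids never repeat.
    public static long MarkId()
    {
        while (true)
        {
            long last = Interlocked.Read(ref _last);
            long next = Math.Max(DateTime.UtcNow.Ticks, last + 1);
            if (Interlocked.CompareExchange(ref _last, next, last) == last)
                return next;
        }
    }
}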
SSTable Data area

The data area is very simple: take the KeyValue's data, serialize it to JSON (ToJson), and write it straight into the file.
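The AsBytes / AsObject&lt;T&gt; extension methods that KeyValue relies on aren't shown in the article either; a plausible System.Text.Json-based sketch (an assumption, not the project's actual helpers):

using System.Text.Json;

static class SerializationExtensions
{
    // Object -> UTF-8 JSON bytes (assumed implementation of the AsBytes used by KeyValue).
    public static byte[] AsBytes(this object value) =>
        JsonSerializer.SerializeToUtf8Bytes(value);

    // UTF-8 JSON bytes -> typed object (assumed implementation of AsObject<T>).
    public static T? AsObject<T>(this byte[] data) where T : class =>
        data == null ? null : JsonSerializer.Deserialize<T>(data);
}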
SSTable Sparse index area

This area stores, in key order, an entry for each key in the data area: mainly the start address and length of the corresponding DataValue, written together with the key itself.
The benefit: when the SSTable's index (IDataPosition) is loaded into memory, the contents of the data area don't need to be loaded, and lookups become much easier — which is exactly what an index is for.
Metadata area

By convention this would be a file header, so why put it at the end? Purely for ease of calculation — a small trick.
It records the start and end of the data area and of the sparse index area, plus this SSTable's version, creation time, and the level it sits at.
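A sketch of my own showing why a trailing metadata block is convenient: seek a fixed distance back from the end of the file, read the offsets, then jump straight to the index region. The field layout here is assumed, not the project's exact format:

using System.IO;

static class SsTableFooter
{
    // Assumed fixed-size footer: four long offsets = 32 bytes at the end of the file.
    private const int FooterSize = 4 * sizeof(long);

    public static (long dataStart, long dataEnd, long indexStart, long indexEnd)
        Read(string path)
    {
        using var fs = File.OpenRead(path);
        fs.Seek(-FooterSize, SeekOrigin.End); // fixed offset from the end: no scanning
        using var br = new BinaryReader(fs);
        return (br.ReadInt64(), br.ReadInt64(), br.ReadInt64(), br.ReadInt64());
    }
}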
SSTable Segment merge compression
When I first read about this piece of logic my brain was mush. I stared at it for ages and analyzed it for ages, but wrote it out anyway. I didn't understand it at first; once I did, writing it became much easier.
Look at the picture below:

Merging is actually stateful; the middle of the figure shows the intermediate state, which I marked with a white dashed box.
The whole logic: first, on a timer, the immutable tables in memory are flushed into level-0 SSTables. Level 0 therefore accumulates many files; when their total size exceeds the threshold, this level's files are merged into one large file, following the same overlay principle as WalLog, and the result is written back to disk as a level-1 SSTable.
And so on for the deeper levels.
The animated diagram below illustrates the merge effect.

The animation shows a few more details; with this picture, the principle should be even clearer.
LSMDatabase performance test
My test cases are still quite simple — when there's a bug I just fix it on the spot. The test here writes one million records directly; results as reported:
keyvalue data length: 151; actual file size: 217 MB; inserting 1,000,000 records took 79,320 ms (79.32 s); average inserts per second: 52,631
keyvalue data length: 151; actual file size: 221 MB; inserting 1,000,000 records took 27,561 ms (27.56 s); average inserts per second: 37,037
1. keyvalue data length: 176
2. Actual file size: 215 MB
3. Inserting 1,000,000 records took 29,545 ms (29.55 s)
4. Average inserts per second: 34,482 (or 30,373, etc. — different configurations and environments give different numbers, but roughly this)
5. The length of the inserted data and the configuration both affect insertion speed
Loading 215 MB / 1,000,000 records took 2,322 ms, i.e. about 2 s (loading the SSTables).
After things stabilize, memory usage sits around 500 MB.
Steady-state query time: hundreds of queries average 0 ms each — probably thanks to the dictionary — though individual point queries occasionally take around 0.3 ms.
Querying all keys across a million records takes 3 s. That's a little slow; presumably the data volume is just too large.




With that, the project is done. It hasn't been through stress testing, but the overall skeleton and content are complete, and it can be patched and improved as concrete needs arise. For now it serves me fine.
Summary
Everything is hard at the start. Crossing the river of time, step by step, it finally came into being. Once something exists, the next related problem becomes much easier; I call such obstacles the barriers of knowledge.
Having your view of the important blocked by the trivial really happens. How do you break through? Only with curiosity: hold the line and keep digging.
References
【万字长文】Implementing a KV database with the LSM-Tree idea
https://www.cnblogs.com/whuanle/p/16297025.html
Xiao Hansong: "From 0: implementing an LSM database in 500 lines of code"
cstack: Let's Build a Simple Database
https://cstack.github.io/db_tutorial/
Database kernel: implementing a database with basic functionality in one hour
https://www.jianshu.com/p/76e5cb53c864
GFS and BigTable, from Google's three papers (GFS, MapReduce, BigTable)
Acknowledgements
1. The fool works well
2. Todd
I haven't communicated in depth with these experts — their level is a bit above mine, after all — but through their articles and brief exchanges they pushed me to study databases further, even to build a real one. Thanks again to both.
Code address
https://github.com/kesshei/LSMDatabaseDemo.git
https://gitee.com/kesshei/LSMDatabaseDemo.git
Reading
If this was useful to you, a like, a favorite, and a share would be appreciated — your support is my motivation!
Copyright
Lanchuang Elite Team (the WeChat official account and CSDN account share this name)