Deep learning autumn recruitment interview question collection (1)
2022-07-07 11:25:00 【qinjianhuang】
The interview questions in this part cover C++ basics, Python basics, probability, brain teasers, algorithms, and deep learning. More will be added over time; contributions are welcome!
Summary of frequently asked C++ backend development interview questions
Q1: Analysis of the C++ virtual function table.
A1: CSDN
Q2: The role and mechanism of virtual destructors in C++.
A2: CSDN
Q3: The difference between structures (struct) and unions (union).
A3: CSDN
Q4: The difference between constants defined with #define and with const.
A4: CSDN
Q5: The C++ STL.
A5: CSDN
Q6: The concept of a red-black tree.
A6: CSDN
Q7: The difference between overloading and overriding.
A7: CSDN
Q8: Details of passing by value, by pointer, and by reference.
A8: CSDN
Q9: The difference between a pointer and a reference.
A9: CSDN
Q10: When a function returns, the difference between returning a value, returning a reference, and returning a pointer.
Q11: Can virtual functions be inline?
Q12: The difference between vector and list.
A12: CSDN
Q13: The export keyword in C++.
A13: CSDN
Q14: The purpose of private constructors.
A14: CSDN
Q15: In what scenarios are friend functions used?
A15: CSDN
Q16: The difference between virtual functions and pure virtual functions?
A16: CSDN
Q17: Details of C++ memory allocation.
A17: CSDN
Summary of frequently asked Python interview questions
Q1: The core underlying principles of Python dictionaries.
A1: An explanation of the core underlying principles of Python dictionaries
Q2: The difference between __new__() and __init__().
A2: CSDN
Q3: The difference between Python's sorting functions sort() and sorted().
A3: CSDN
Summary of frequently asked algorithm interview questions
Q1: Implement a square-root function sqrt using Newton's iteration.
A1: CSDN
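A minimal sketch of the idea (mine, not the linked answer): for f(r) = r^2 - x, Newton's method repeatedly applies r <- (r + x/r) / 2 until successive iterates stop changing.
public class NewtonSqrt {
    // Newton's iteration for sqrt(x): r <- (r + x / r) / 2.
    public static double sqrt(double x) {
        if (x < 0) throw new IllegalArgumentException("x must be non-negative");
        if (x == 0) return 0;
        double r = x, prev;
        do {
            prev = r;
            r = (r + x / r) / 2;                      // one Newton step
        } while (Math.abs(r - prev) > 1e-12 * r);     // stop when the update is tiny
        return r;
    }

    public static void main(String[] args) {
        System.out.println(sqrt(2.0));                // ~1.4142135623730951
    }
}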
Q2: Second-order optimization methods.
Q3: Quicksort and its optimizations.
A3: CSDN
Q4: Preorder traversal of a binary tree.
A4: zhihu
Q5: The longest subsequence.
Q6: Invert a binary tree.
A6: zhihu
Q7: Bit operations in algorithms and data structures.
A7: zhihu
Q8: A greedy problem (boats crossing the river).
A8: CSDN
Q9: Longest palindromic substring & longest palindromic subsequence (dynamic programming).
A9: CSDN
Q10: Subtraction of large numbers (a classic question).
A10: CSDN
Q11: Count the number of paths from the top-left corner to the bottom-right corner of a 2-D array and find the shortest one; what if there are obstacles?
A11: CSDN | Reaching position (m, n) takes m+n-2 steps, of which n-1 are moves to the right, so the count is the binomial coefficient $C_{m+n-2}^{n-1}$.
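When obstacles are present the closed-form count no longer applies; a minimal DP sketch of mine (not taken from the linked answer) counts the paths in O(mn):
public class GridPaths {
    // dp[i][j] = number of right/down paths from (0,0) to (i,j);
    // cells marked 1 in `obstacle` are blocked and contribute 0 paths.
    public static long countPaths(int[][] obstacle) {
        int m = obstacle.length, n = obstacle[0].length;
        long[][] dp = new long[m][n];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                if (obstacle[i][j] == 1) continue;          // blocked cell
                if (i == 0 && j == 0) { dp[i][j] = 1; continue; }
                long fromTop  = (i > 0) ? dp[i - 1][j] : 0;
                long fromLeft = (j > 0) ? dp[i][j - 1] : 0;
                dp[i][j] = fromTop + fromLeft;
            }
        }
        return dp[m - 1][n - 1];
    }
}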
Q12: Edit distance and the edit-distance algorithm (dynamic programming).
A12: CSDN
Q13: Massive-data problems.
A13: CSDN
Q14: Find the middle element of a linked list whose length is unknown.
A14: Use two pointers p1 and p2, both starting at the head of the list. p1 advances two steps at a time and p2 one step at a time; when p1 reaches the end of the list, p2 is at the middle element (see the sketch below).
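A minimal sketch of the fast/slow-pointer answer above, using a hypothetical ListNode class:
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

public class MiddleOfList {
    // The fast pointer moves two steps per iteration, the slow pointer one step;
    // when fast reaches the end, slow is at the middle (the second middle
    // element for even-length lists).
    public static ListNode middle(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            fast = fast.next.next;
            slow = slow.next;
        }
        return slow;
    }
}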
Q15: Remove duplicate nodes from a sorted linked list.
A15: CSDN
Q16: Find the element that appears more than half the length of the array.
A16: CSDN
Q17: How many ways are there to tile a 3*20 floor exactly with 1*3 tiles? (1278)
A17: Nowcoder
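A sketch of the recurrence that produces 1278 (my own reasoning, not the linked solution): the last column of a 3*n floor is covered either by one vertical 1*3 tile or by three horizontal 1*3 tiles spanning the last three columns, giving f(n) = f(n-1) + f(n-3).
public class Tiling3xN {
    // f(n) = f(n-1) + f(n-3), with f(0) = f(1) = f(2) = 1.
    public static long ways(int n) {
        if (n <= 2) return 1;
        long[] f = new long[n + 1];
        f[0] = 1; f[1] = 1; f[2] = 1;
        for (int i = 3; i <= n; i++) {
            f[i] = f[i - 1] + f[i - 3];
        }
        return f[n];
    }

    public static void main(String[] args) {
        System.out.println(ways(20));   // 1278
    }
}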
Q18: Print a matrix in clockwise (spiral) order.
A18: CSDN
Q19: Cyclic left rotation of a string.
A19: CSDN
Q20: Given a string s, you may delete some of its characters so that the remaining string is a palindrome. How do you delete characters so that the resulting palindrome is as long as possible?
A20: CSDN
Q21: Deduplicate a sorted array.
A21: CSDN
Q22: Maximum sum of a contiguous subarray.
A22: CSDN
Q23: Swap the values of two variables a and b without using a third variable.
A23: a = a + b; b = a - b; a = a - b;
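The same trick as runnable Java, plus the XOR variant (an addition of mine), which avoids integer overflow:
public class SwapDemo {
    public static void main(String[] args) {
        int a = 3, b = 5;
        a = a + b;                        // a holds the sum (may overflow for large values)
        b = a - b;                        // b is now the original a
        a = a - b;                        // a is now the original b
        System.out.println(a + " " + b);  // 5 3

        a ^= b;                           // XOR variant: no overflow risk
        b ^= a;
        a ^= b;
        System.out.println(a + " " + b);  // 3 5
    }
}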
Q24: Comparison of several common sorting algorithms.
A24: CSDN
Q25: 4.2 billion QQ numbers; complete lookups in O(1) time.
A25: CSDN
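The usual approach (an assumption here, not confirmed by the linked answer) is a bitmap: one bit per possible QQ number, so membership is tested in O(1), and roughly 4.2 billion bits is about 512 MB. A sketch using a raw long[] because java.util.BitSet cannot hold more than Integer.MAX_VALUE bits:
public class BitMap {
    private final long[] bits;

    // capacity = number of distinct IDs representable, e.g. 4_300_000_000L
    public BitMap(long capacity) {
        bits = new long[(int) ((capacity + 63) / 64)];
    }

    public void set(long id) {                  // mark id as present
        bits[(int) (id >>> 6)] |= 1L << (id & 63);
    }

    public boolean contains(long id) {          // O(1) membership test
        return (bits[(int) (id >>> 6)] & (1L << (id & 63))) != 0;
    }
}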
Q26: There are n squares in a row. Each square is painted with one of three colors: red, pink, or green. Adjacent squares must have different colors, and the first and last squares must also have different colors. Find the number of valid colorings.
A26: Consider the (n-1)-th square:
If its color differs from that of the 1st square, the n-th square has only 1 choice, and the first n-1 squares can be colored in A(n-1) ways, contributing 1*A(n-1).
If its color is the same as that of the 1st square, the n-th square has 2 choices, and the first n-2 squares can be colored in A(n-2) ways, contributing 2*A(n-2).
Hence A(n) = A(n-1) + 2*A(n-2) for n >= 4.
Remark: because the argument looks at the (n-1)-th square, whose color may or may not equal that of the 1st square, the recurrence only starts at n >= 4. For n = 3 the (n-1) = 2nd square must differ from the 1st square, so the "same color" case cannot occur; the recurrence is therefore derived from n >= 4 onward.
More generally, if m is the number of colors and F(n) the number of colorings, then F(n) = (m-2)*F(n-1) + (m-1)*F(n-2).
A(1) = 3
A(2) = 6   // A(3,2) = 6
A(3) = 6   // A(3,3) = 6
A(n) = A(n-1) + 2*A(n-2), n >= 4 (see the sketch below)
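A minimal sketch (an illustration of mine) that evaluates the recurrence above for three colors:
public class RingColoring {
    // A(1) = 3, A(2) = 6, A(3) = 6, A(n) = A(n-1) + 2*A(n-2) for n >= 4.
    public static long ways(int n) {
        if (n == 1) return 3;
        if (n == 2 || n == 3) return 6;
        long[] a = new long[n + 1];
        a[1] = 3; a[2] = 6; a[3] = 6;
        for (int i = 4; i <= n; i++) {
            a[i] = a[i - 1] + 2 * a[i - 2];
        }
        return a[n];
    }

    public static void main(String[] args) {
        System.out.println(ways(4));    // 18
    }
}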
Q27: Given integers n and m, arrange the integers 1 to n in lexicographic (dictionary) order and find the m-th number.
A27:
Since the order is lexicographic, it is natural to think of a trie (dictionary tree) of digit prefixes. There is no need to actually build the trie; it suffices to count how many numbers lie under each branch. Counting is simple: each node has up to 10 children, so the total under a prefix is 1 + (1 * 10) + (1 * 10 * 10) + (1 * 10 * 10 * 10) + ..., truncated at n.
Given that count, finding the m-th number means finding the m-th node in a preorder walk of the trie. Start from the prefix 1: if the branch rooted at 1 contains at least m numbers, the m-th number must start with 1, so descend into its children and look for the (m-1)-th node there (the prefix 1 itself has used up one position). If the branch rooted at 1 contains fewer than m numbers, the answer does not start with 1, so subtract the branch size from m and move on to the branch rooted at 2, looking for the (m - size of branch 1)-th number there. Repeat this process, either descending into a child or moving to the next sibling, until m == 0; the current node is the answer.
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        while (sc.hasNext()) {
            long n = sc.nextLong();
            long m = sc.nextLong();
            System.out.println(solve(n, m));
        }
    }

    private static long solve(long n, long m) {
        long ans = 1;
        while (m != 0) {
            long cnt = getCntOfPre(ans, n);
            if (cnt >= m) {
                m--;                 // the prefix `ans` itself takes one position
                if (m == 0)
                    break;
                ans *= 10;           // descend into the children of `ans`
            } else {
                m -= cnt;            // skip this whole branch
                ans++;               // move to the next sibling prefix
            }
        }
        return ans;
    }

    // Count how many numbers in [1, n] start with the prefix `pre`.
    private static long getCntOfPre(long pre, long n) {
        long cnt = 1;
        long p = 10;
        for (; pre * p <= n; p *= 10) {
            if (pre * p - 1 + p < n)
                cnt += p;            // the whole level fits below n
            else
                cnt += n - pre * p + 1;
        }
        return cnt;
    }
}
Summary of frequently asked probability and brain-teaser questions
Q1: Use 10 mice to find the single poisoned bottle among 1000 bottles of medicine.
A1: CSDN
Q2: Catalan numbers.
A2: anonymous
Q3: The probability that three points chosen on a circle form an acute triangle. (1/4)
A3: anonymous
Q4: There are two unevenly burning incense sticks, each of which takes one hour to burn completely. How can you measure a 15-minute interval with them?
A4: Light both ends of the first stick and, at the same time, one end of the second stick. When the first stick has burned out, half an hour has passed; at that moment light the other end of the second stick. The time from when the first stick finishes burning to when the second stick finishes burning is 15 minutes.
Q5: There are 10 bottles of pills, each containing 10 pills. One bottle is defective: each of its pills is 0.1 g heavier than normal. How can you identify the defective bottle with a single weighing?
A5: Take 1 pill from bottle 1, 2 pills from bottle 2, ..., 10 pills from bottle 10, weigh them all together, and see by how many multiples of 0.1 g the total exceeds the normal weight; that multiple is the number of the defective bottle.
Q6: A room contains three lamps, and there are three switches outside. You may enter the room only once; how do you determine which switch controls which lamp?
A6: Turn one switch on for a while and then turn it off, then turn a second switch on and enter the room. Touch the bulbs: the lamp that is off but warm belongs to the first switch, the lamp that is on belongs to the second switch, and the remaining lamp belongs to the third.
Q7: 50 red balls and 50 blue balls are to be placed in two boxes. How should they be placed to maximize the probability of drawing a red ball (a box is picked at random, then a ball from it)?
A7: Put 1 red ball in one box and the remaining 49 red balls and 50 blue balls in the other. The probability of drawing a red ball = 0.5 + 0.5 * (49/99), which is approximately 0.75.
Q8: A stick is broken into three pieces; what is the probability that the pieces can form a triangle? (1/4)
A8: anonymous
Q9: A blind man is given 52 playing cards and told that exactly 10 of them are face up. Ask him to split the cards into two piles so that each pile contains the same number of face-up cards.
A9: Split the cards into two piles, one of 10 cards and one of 42 cards, then flip over every card in the small pile. (If the 10-card pile happened to contain all 10 face-up cards, the 42-card pile contains none; after flipping, both piles have zero face-up cards. If the 10-card pile contained 9 face-up cards, the 42-card pile contains 1; after flipping, each pile has 1 face-up card, and so on.)
Q10: There are n items of data, but n is not known in advance. Design a sampling algorithm so that every item is selected with equal probability.
A10: Take the first item as the candidate; replace the candidate with the second item with probability 1/2, with the third item with probability 1/3, and so on. In this way, the probability that the x-th item is the final choice = (probability the x-th item is chosen) * (probability the (x+1)-th item does not replace it) * (probability the (x+2)-th item does not replace it) * ... * (probability the n-th item does not replace it). For example, the probability that the 2nd item is the final choice is 1/2 * (2/3 * 3/4 * 4/5 * ... * (n-1)/n) = 1/n. (A code sketch follows.)
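A minimal sketch of this scheme (reservoir sampling with a reservoir of size 1):
import java.util.Iterator;
import java.util.Random;

public class ReservoirSample {
    // Keep the i-th item with probability 1/i; the final candidate is uniform
    // over the whole stream even though its length is unknown in advance.
    public static <T> T pickUniform(Iterator<T> stream, Random rnd) {
        T chosen = null;
        int count = 0;
        while (stream.hasNext()) {
            T item = stream.next();
            count++;
            if (rnd.nextInt(count) == 0) {   // true with probability 1/count
                chosen = item;
            }
        }
        return chosen;
    }
}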
Q11: Given n numbers, each in the range [0, n), count the number of occurrences of each number without allocating extra space.
A11: The key is to read the problem carefully: the n numbers lie in a left-closed, right-open interval, so every number is strictly less than n. The main ideas are:
1) No matter how many multiples of n are added to the value stored at an index, taking that value modulo n recovers the original number;
2) Conversely, every time a number x appears, add n to the value stored at index x. At the end, dividing the value at index x by n gives the number of times x appears. No extra space is needed.
public class TimeDemo {
    public static void main(String[] args) {
        int[] arr = {0, 1, 3, 4, 3, 2, 5, 4, 3, 7, 3, 2, 4, 6};
        int n = 8;
        for (int i = 0; i < arr.length; i++) {
            arr[arr[i] % n] += n;                       // record one occurrence of arr[i] % n
        }
        for (int i = 0; i < n; i++) {
            System.out.println(i + ":" + arr[i] / n);   // number of occurrences of i
        }
    }
}
Q12: Given a function rand7() that returns a uniformly random integer from 1 to 7, how can you use it to construct rand10(), returning a uniformly random integer from 1 to 10?
A12: The key principle for generating random numbers is that every value must be equally likely; if you can generate a set of equally likely values, you can then map them onto 1-10.
rand7() returns an integer from 1 to 7. Construct the expression (rand7()-1)*7 + rand7(), which produces integers from 1 to 49. Because each number in 1-49 corresponds to exactly one pair of values from the first and second calls to rand7(), all 49 outcomes are equally likely, each with probability 1/49.
That is too many values, so discard 41-49 and split 1-40 into 10 groups, mapping each group to one value in 1-10; the result is then uniform.
Concretely: generate x = (rand7()-1)*7 + rand7(); if x is greater than 40, retry until x is less than or equal to 40; otherwise return (x-1)/4 + 1. (A code sketch follows.)
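A runnable sketch of the construction above; rand7() is merely simulated with java.util.Random for illustration.
import java.util.Random;

public class Rand10FromRand7 {
    private static final Random RND = new Random();

    static int rand7() {                       // stand-in: uniform on 1..7
        return RND.nextInt(7) + 1;
    }

    static int rand10() {
        int x;
        do {
            x = (rand7() - 1) * 7 + rand7();   // uniform on 1..49
        } while (x > 40);                      // reject 41..49
        return (x - 1) / 4 + 1;                // map 1..40 onto 1..10, 4 values each
    }
}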
Q13: A couple has two children, and one of the children is a girl. What is the probability that the other child is a boy? (2/3)
A13: The genders of the two children have four equally likely possibilities: (boy, boy), (boy, girl), (girl, boy), (girl, girl). Knowing that one child is a girl rules out (boy, boy), leaving three cases, two of which include a boy, so the answer is 2/3. The answer is not 1/2 because it is not known whether the girl is the first-born or the second-born.
Notes on common probability distributions
Geometric distribution:
- The number of trials is fixed, denoted n. (For example, flipping a coin 3 times, or proposing 101 times.)
- Each trial has two possible outcomes (success or failure). (For example, the proposal is accepted (success) or rejected (failure).)
- The probability of success is the same in every trial, denoted p.
- The distinction from the binomial distribution: the binomial distribution asks for the probability of x successes, while the geometric distribution asks for the probability that the first success occurs on the x-th trial.
Probability mass function of the geometric distribution: $p(x)=(1-p)^{x-1}p$; expectation: $\frac{1}{p}$.
Hypergeometric distribution:
A common practical problem in product sampling inspection: suppose that among N products there are M defective ones, and n products are randomly selected for inspection. The probability of finding exactly k defective products is:
$P(X=k)=\frac{C_{M}^{k}C_{N-M}^{n-k}}{C_{N}^{n}}$
- Difference from the binomial distribution: the hypergeometric distribution requires knowing the total population size, while the binomial distribution does not;
- The hypergeometric distribution corresponds to sampling without replacement, while the binomial distribution corresponds to sampling with replacement (independent repetitions). When the population is very large, the hypergeometric distribution is approximated by the binomial distribution.
Bernoulli distribution:
The Bernoulli distribution is also known as the two-point distribution or the 0-1 distribution. It is a discrete probability distribution with probability mass function:
$f(x)=p^{x}(1-p)^{1-x}=\begin{cases} p & \text{if } x=1 \\ 1-p & \text{if } x=0 \\ 0 & \text{otherwise} \end{cases}$
Binomial distribution:
The binomial distribution is the discrete probability distribution of the number of successes in n repeated Bernoulli trials.
$P\{X=k\}=C_{n}^{k}p^{k}(1-p)^{n-k}$
Expectation: $E(X)=np$; variance: $D(X)=np(1-p)$.
Several common methods for computing the distance between two probability distributions, with Python implementations
Summary of frequently asked machine learning and deep learning interview questions
Q1: bagging vs. boosting vs. random forests vs. GBDT.
A1: CSDN
Q2: Ways to deal with the problem of local optima.
A2:
Use stochastic gradient descent instead of true (full-batch) gradient descent. Each update follows the error surface of a single sample; the per-sample surfaces are roughly similar but not identical, so when the optimization falls into a pit on one surface, the other surfaces can often pull it out.
Use momentum. As the name suggests, the size of each step is increased according to the previous step, like a stone rolling down from a height: it has a better chance of rolling over small pits, although if a pit is very large, the inertia of momentum alone cannot escape it. (The standard update rule is written out below.)
Train from different initial weights. Think of the error surface as full of potholes: each random initialization lands at a different starting point before the search begins, so with enough restarts one run may avoid the small pits and find a good, near-global solution.
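For reference, one common formulation of the momentum update (notation mine, not from the original answer) is $v_{t}=\gamma v_{t-1}+\eta \nabla_{\theta} J(\theta_{t-1})$ and $\theta_{t}=\theta_{t-1}-v_{t}$, where $\eta$ is the learning rate and $\gamma$ (typically around 0.9) controls how much of the previous step is carried over.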
Q1: What is the difference between smooth L1 loss and L2 loss?
A1: Yinxiangnan's answer | A few answers
Q2: Faster R-CNN?
A2: Bai Shang's article
Q3: Summary of optimization methods (BGD, SGD, Momentum, AdaGrad, RMSProp, Adam).
A3: CSDN
Q4: Performance evaluation criteria for models.
A4: Juejin
Q5: What does "epoch" mean when specifying the number of training epochs in deep learning?
A5:
(1) iteration: one iteration (also called a training step); the network parameters are updated once per iteration;
(2) batch-size: the number of samples used in one iteration;
(3) epoch: one epoch means one full pass through all the samples in the training set.
It is worth noting that in deep learning, mini-batch stochastic gradient descent is commonly used to train deep models; its advantage is that it does not need to traverse all samples for each update, which is very effective when the amount of data is large. In that setting, an epoch can also be defined according to the practical problem at hand, for example 10,000 iterations as one epoch; if the batch size of each iteration is set to 256, then one epoch corresponds to passing 2,560,000 training samples.
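For reference (not from the original answer): with a training set of $N$ samples, one full pass corresponds to $\lceil N/\text{batch\_size} \rceil$ iterations, and after $k$ iterations the model has processed $k \times \text{batch\_size}$ samples, which is how the $10000 \times 256 = 2{,}560{,}000$ figure above is obtained.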
Q6: In neural networks, is there any difference between the sigmoid and tanh activation functions other than their output ranges?
A6: Daniel's answer
Q7: The principle of Batch Normalization and its benefits?
A7: Juliuszh's article | Tianyusu's article
Q8: Understanding of YOLO v1~v3?
A8: In-depth understanding of YOLO v1 | In-depth understanding of YOLO v3
Q9: Group Normalization?
A9: CSDN | DoubleV's article | Eminem1147's article
Q10: Understanding of Inception-v1~v4?
A10: the article by "I am a young general" | BLOG
Q11: Is there any difference between RoIPooling and RoIAlign?
A11: CSDN
Q12: The principle of Mask R-CNN?
A12: stone's article
Q13: The principle of the back-propagation algorithm for convolutional neural networks (CNNs)?
A13: BLOG
Q14: What is a saddle point?
A14: Xixiaoyao's cute house
Q15: Derivation of Softmax?
A15: CSDN
Q16: Various optimization methods for neural networks?
A16: CSDN
Q17: Detailed explanation of FPN.
A17: CSDN
Q18: Detailed explanation of SVM.
A18: CSDN
A19 :
(1) The essence of the loss function is , If we're right , Be able to go unpunished , If the prediction is wrong , Will cause the loss function to become very large , That is to say, the punishment is greater , and -log Function in [-1,1] Just in line with this , There is another point to be made ,LR It is a generalized linear regression model , Square loss function , about Sigmoid Function derivation calculation , There is no guarantee that it is a convex function , In the process of optimization , The solution obtained may be a local minimum , Not the global optimal value . second : After taking the logarithm , It's convenient for our subsequent derivation .
(2) If it is calculated directly from the likelihood function , There are two disadvantages :(1) It is not conducive to the subsequent derivation ,(2) The calculation of likelihood function will lead to lower overflow .( The final calculation is the result of multiplication , And the value after each calculation is very small , If there are many samples , It leads to overflow . such as 0.01 x 0.01 x 0.01…)
Specific derivation process :CSDN
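For reference, a standard sketch of the derivation (notation mine): with $h_\theta(x)=\frac{1}{1+e^{-\theta^{T}x}}$, the likelihood of the training set is $L(\theta)=\prod_{i}h_\theta(x_i)^{y_i}(1-h_\theta(x_i))^{1-y_i}$; taking the logarithm turns the product into a sum, $\ell(\theta)=\sum_{i}\left[y_i\log h_\theta(x_i)+(1-y_i)\log(1-h_\theta(x_i))\right]$, which avoids underflow and is easy to differentiate, and the log loss is simply $-\ell(\theta)$ (often averaged over the samples).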
Q20: The collinearity problem in LR and its solutions.
A20: CSDN
Q21: Similarities and differences between LR and SVM.
A21: CSDN
Q22: The mathematical foundations of SVM.
A22: CSDN
Q23: Formula derivation for convolutional neural networks (CNNs).
A23: campoo
Q24: Code for computing IoU.
A24: Reference
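A minimal IoU sketch (mine, not the linked reference), assuming axis-aligned boxes given as [x1, y1, x2, y2] with x1 < x2 and y1 < y2:
public class IoU {
    // Intersection-over-Union of two axis-aligned boxes.
    public static double iou(double[] a, double[] b) {
        double interW = Math.max(0, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
        double interH = Math.max(0, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
        double inter  = interW * interH;
        double areaA  = (a[2] - a[0]) * (a[3] - a[1]);
        double areaB  = (b[2] - b[0]) * (b[3] - b[1]);
        return inter / (areaA + areaB - inter);
    }

    public static void main(String[] args) {
        double[] boxA = {0, 0, 2, 2};
        double[] boxB = {1, 1, 3, 3};
        System.out.println(iou(boxA, boxB));   // 1 / 7 ≈ 0.142857
    }
}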
Q25: Faster R-CNN details.
A25: BLOG
Q26: How to understand dilated convolution.
A26: zhihu
Q27: How to understand deconvolution networks in deep learning.
A27: zhihu
Q28: The specific method by which YOLOv2 and YOLOv3 use k-means clustering to compute anchor boxes.
A28: CSDN
Q29: What are L1 and L2 regularization for, and what do they mean?
A29: zhihu
Q30: The relationship between KL divergence and cross entropy.
A30: zhihu
Q31: Calculating the size of the receptive field in a CNN.
A31: CSDN
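For reference, one common recursive formulation (not from the linked article): if layer $l$ has kernel size $k_l$ and stride $s_l$, the receptive field grows as $r_{l}=r_{l-1}+(k_{l}-1)\prod_{i=1}^{l-1}s_{i}$ with $r_{0}=1$; that is, each layer adds $(k_l-1)$ times the cumulative stride of all preceding layers.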