UT2013 Learning Notes
2022-07-03 10:29:00 【IFI_rccsim】
Challenge description
A. SPL Challenge
The SPL challenge requires every participating team to contribute players: with six teams taking part, each team contributes one or two substitute (drop-in) players. If two players from the same team are used, they play on the same side. Both drop-in teams are composed of randomly selected substitutes. Each contestant plays 4 drop-in games, and every game lasts 5 minutes; the shorter games allow more games to be played within the allotted time. In a normal SPL game the goalkeeper is designated at the start of the game; in the challenge, the first defender to enter the penalty area becomes the goalkeeper for that game.
During the challenge, players are allowed to communicate with each other using a simple protocol, but this communication is optional. The protocol allows communicating the player's and the ball's positions, the variance (uncertainty) of those positions, the ball's velocity, the time since the ball was last seen, and whether the robot has fallen or is penalized.
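As a rough illustration only, a message carrying the fields described above might look like the following Python sketch; the field names and types here are assumptions for readability, not the official SPL drop-in protocol layout.

```python
from dataclasses import dataclass

@dataclass
class DropInMessage:
    """Hypothetical container for the drop-in communication fields
    described above (names/types assumed, not the official layout)."""
    player_x: float          # believed own position on the field
    player_y: float
    player_var: float        # uncertainty (variance) of own position
    ball_x: float            # believed ball position
    ball_y: float
    ball_var: float          # uncertainty (variance) of ball position
    ball_vx: float           # ball velocity
    ball_vy: float
    ms_since_ball_seen: int  # time since the ball was last seen
    fallen: bool             # robot has fallen
    penalized: bool          # robot is currently penalized
```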
The SPL challenge uses two metrics for scoring: average goal difference and the average score from three human referees. These two metrics are combined to determine the SPL challenge champion. The human referees help identify good teamwork among the agents and reduce the influence of random variance over a limited number of games. For every game, each referee is asked to score every player on a scale from 0 (bad) to 10 (good). The judges are asked to focus on teamwork ability rather than individual skill, so a robot with weak low-level skills can still receive a high score for good teamwork.
B. 2D Challenge
For the 2D drop-in challenge, every team contributed two substitute players, and the two drop-in teams were composed of those substitutes. Matches consist of two 5-minute halves, and each team has 7 players rather than the standard 11. The number of players was changed because most teams release code based on the same base release, which provides a default formation for 11-player teams and would give those teams implicit coordination. The seventh player of each team is the goalkeeper from agent2d, and no coach is used in the challenge. Substitute players are encouraged to communicate using the default agent2d communication protocol, but this is not required.
Players are scored by average goal difference; there are no human referees. To measure performance accurately, each team plays at least one game with and against every other team. A total of nine teams participated in this challenge. Game pairings are selected by a greedy algorithm that tries to balance the number of times agents from different teams cooperate and compete, as shown in Alg. 1 (a sketch follows below). The algorithm is general and can be applied to other ad hoc team cooperation settings. It stops when every agent has played at least one game with every other team.
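Alg. 1 itself is not reproduced in these notes. Purely as a sketch of the idea, the following Python code implements one plausible greedy pairing scheme under the stated goal; all names are my own, and the paper's actual algorithm may differ in its cost function and stopping rule.

```python
import itertools
import random
from collections import Counter

def imbalance(counts):
    """Spread between the most- and least-frequent pairings; lower is more balanced."""
    return max(counts.values()) - min(counts.values())

def greedy_schedule(teams, side_size, n_games, n_candidates=200):
    """Greedily build a drop-in schedule: each round, sample candidate splits of
    the team pool into two sides and keep the split that best balances how often
    every pair of teams has cooperated (same side) and competed (opposite sides)."""
    pairs = list(itertools.combinations(sorted(teams), 2))
    together = Counter({p: 0 for p in pairs})  # games a pair spent on the same side
    against = Counter({p: 0 for p in pairs})   # games a pair spent on opposite sides
    schedule = []
    for _ in range(n_games):
        best = None
        for _ in range(n_candidates):
            pool = random.sample(teams, 2 * side_size)
            side_a, side_b = pool[:side_size], pool[side_size:]
            t, a = together.copy(), against.copy()
            for pair in itertools.combinations(sorted(side_a), 2):
                t[pair] += 1
            for pair in itertools.combinations(sorted(side_b), 2):
                t[pair] += 1
            for x in side_a:
                for y in side_b:
                    a[tuple(sorted((x, y)))] += 1
            cost = imbalance(t) + imbalance(a)
            if best is None or cost < best[0]:
                best = (cost, side_a, side_b, t, a)
        _, side_a, side_b, together, against = best
        schedule.append((side_a, side_b))
    return schedule

# Example: nine teams, four team slots per side, six games.
print(greedy_schedule([f"T{i}" for i in range(9)], side_size=4, n_games=6))
```

A real scheduler would stop once every agent has appeared in at least one game with every other team (the stopping condition stated above) rather than after a fixed `n_games`.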
C. 3D Challenge
For the 3D drop-in challenge, each team contributed two substitute players to each match. Games consist of two 5-minute halves, and both teams are made up of substitute players. There are no goalkeepers in the challenge, to increase the likelihood of scoring.
Every substitute can communicate the ball's position and its own position to teammates, but using this protocol is optional. The challenge score is determined solely by each agent's average goal difference across all of its games. Four drop-in games were played during the challenge, and all 10 teams participated in every game. The team pairings were decided by Alg. 1, so every substitute player plays at least one game.
A greedy algorithm yields a locally optimal solution; each choice it makes must satisfy a local optimality condition. Dynamic programming, in contrast, finds the globally optimal solution.
The basic idea of the greedy method:
1. Build a mathematical model describing the problem.
2. Divide the problem into several subproblems.
3. Solve each subproblem, obtaining its locally optimal solution.
4. Combine the subproblems' locally optimal solutions into an approximate solution of the original problem (a minimal illustration follows this list).
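These four steps are generic. As a small illustration that is not from the paper, the classic coin-change problem shows both where the greedy method works and where its locally optimal choices fail to be globally optimal:

```python
def greedy_change(amount, denominations):
    """Greedy coin change: always take the largest coin that still fits
    (the locally optimal choice at each step)."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins

print(greedy_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1] -- optimal here
print(greedy_change(6, [4, 3, 1]))        # [4, 1, 1] -- suboptimal; [3, 3] is shorter
```

Dynamic programming over the same problem would consider all subproblem solutions and recover the global optimum [3, 3].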
Analysis of competition results
* SPL (Standard Platform League) matches
  * Teams that perform well in the drop-in challenge have strong cooperation ability, but not necessarily the best low-level skills.
  * Overall, main-competition results are positively correlated with drop-in challenge results.
* 2D matches
  * The competition data sample was too small, so each team later played 1000 matches against the standard UT team, and the mean and standard deviation of the goal difference were recorded.
  * Drop-in challenge results show no significant direct relationship with main-competition results, but teams that do well in the standard team competition (the main draw) tend to perform better in the drop-in challenge.
  * An important part of the drop-in challenge is adapting to teammates (from other teams).
  * Dynamic role assignment can improve performance in the drop-in challenge (see the sketch after this list).
* 3D matches
  * Drop-in challenge results show no significant direct relationship with main-competition results, but teams that perform well in the drop-in challenge tend to perform better in the standard team competition (the main draw).
* Drop-in challenge strategy analysis
  * Four teams were built and run in drop-in matches identical to those described in Section V-C:
    * Dribble: never kicks or passes, only dribbles the ball
    * DynamicRoles: uses dynamic role assignment
    * NoKickoff: does not pass to the side at kickoff(?)
    * UTAustinVilla: the agent actually used in the competition
  * Result: Dribble > UTAV > NoKickoff > DynamicRoles
* Conclusions
  * When no teammate takes the kickoff, kicking off by passing to the side helps improve performance (UTAV > NoKickoff).
  * Kicking/passing negatively affects the result, probably because UT's low-level skills are better than those of its drop-in teammates, so passing the ball to them lowers the team's overall level.
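Dynamic role assignment is mentioned above without detail; as a minimal sketch of the general idea only (not UT Austin Villa's actual role-assignment algorithm), one can reassign players to role positions every cycle by choosing the mapping that minimizes total travel distance:

```python
import itertools
import math

def assign_roles(players, role_positions):
    """Brute-force dynamic role assignment sketch: evaluate every one-to-one
    mapping of players to role positions and keep the one minimizing total
    distance. Assumes len(players) == len(role_positions); O(n!) is fine
    for the handful of roles on a soccer team."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(players))):
        cost = sum(dist(players[perm[r]], role_positions[r])
                   for r in range(len(role_positions)))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return {r: best_perm[r] for r in range(len(role_positions))}  # role -> player

# Example: three players, three role targets (e.g. striker, left wing, right wing).
players = [(0.0, 0.0), (5.0, 2.0), (-3.0, -1.0)]
roles = [(6.0, 0.0), (-4.0, 0.0), (0.0, 1.0)]
print(assign_roles(players, roles))
```

Because roles are recomputed as positions change, a player is never locked into a role that has become a poor fit, which is presumably why this helps when playing alongside unknown drop-in teammates.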