Player in Practice 21: Audio and Video Synchronization
2022-06-12 14:10:00 【Sister Suo】
1. Summary
The frame rate of the video and the sample rate of the audio tell you how fast each stream should play. Both the sound card and the graphics card consume data one frame at a time, so if playback strictly followed the frame rate and the sample rate, audio and video should in theory stay in sync with no drift.
In reality, with this naive approach the audio and video slowly drift apart: either the video runs ahead of the audio or the audio runs ahead of the video. Possible reasons:
The display time of a single frame is hard to control precisely. Audio and video decoding and rendering take different amounts of time, so each frame comes out slightly early or late, and the accumulated error eventually becomes noticeable (for example, when limited by performance, it may take 42 ms to output one frame).
Audio output is linear, while video output may be non-linear, which also introduces a deviation.
Therefore, timestamps are introduced to solve the audio-video synchronization problem:
First choose a reference clock (its time must increase linearly);
When encoding, stamp each audio and video data block with a timestamp taken from the reference clock;
When playing, adjust playback according to the audio/video timestamps and the reference clock.
Audio-video synchronization is therefore a dynamic process: being in sync is temporary, and being slightly out of sync is the norm.
Because people are more sensitive to sound than to images, and audio is played linearly while video frames are not, the usual strategy is to synchronize video to audio: play the audio normally, slow the video down if it is running ahead, and speed it up if it is lagging behind (a rough decision sketch follows the link below).
More detail: https://blog.csdn.net/myvest/article/details/97416415
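As a rough illustration of that strategy (not code from this project), here is a minimal sketch of the per-frame decision, assuming both the video frame pts and the audio clock are already in milliseconds; the names and the 100 ms threshold are invented for the example:

// Hypothetical sketch: decide what to do with the next video frame
// by comparing its pts against the audio clock (both in milliseconds).
enum class SyncAction { Show, Wait, Drop };

SyncAction decide(long long videoPts, long long audioClockMs, long long dropThresholdMs = 100)
{
    if (videoPts > audioClockMs)                    // video is ahead: hold the frame back
        return SyncAction::Wait;
    if (audioClockMs - videoPts > dropThresholdMs)  // video is far behind: drop frames to catch up
        return SyncAction::Drop;
    return SyncAction::Show;                        // close enough: display it now
}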
2. The audio clock
Call av_read_frame() with the demux context ic and the packet pointer pkt to read one demuxed packet; the result is stored in pkt:
AVPacket* xdemux::readfz()
{
    mux.lock();
    if (!ic)
    {
        mux.unlock();
        return 0;
    }
    AVPacket* pkt = av_packet_alloc();  // allocates only the AVPacket object, not the data buffer
    int re = av_read_frame(ic, pkt);    // allocates the data buffer and fills the packet
    if (re != 0)                        // av_read_frame() returns 0 on success
    {
        mux.unlock();
        av_packet_free(&pkt);
        return 0;
    }
    // convert pts/dts from stream time_base units to milliseconds
    pkt->pts = pkt->pts * (1000 * r2d(ic->streams[pkt->stream_index]->time_base));
    pkt->dts = pkt->dts * (1000 * r2d(ic->streams[pkt->stream_index]->time_base));
    mux.unlock();
    cout << " pkt->dts " << pkt->dts << endl;
    return pkt;
}
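r2d() is not shown in this excerpt; it presumably converts an AVRational such as a stream's time_base into a double. A minimal sketch of such a helper, under that assumption:

extern "C" {
#include <libavutil/rational.h>
}

// Presumed helper: AVRational (e.g. stream->time_base) to double, guarding against a zero denominator.
static double r2d(AVRational r)
{
    return r.den == 0 ? 0.0 : (double)r.num / (double)r.den;
}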

You can see that the dts values of the packets keep increasing, with gaps between them. When decoding, dts is always monotonically increasing whether or not there are B-frames; without B-frames dts equals pts, and with B-frames they differ.
In the xdecode class, add a pts member:
long long pts = 0;
It records the pts of the most recently decoded frame.
In xdecode::receive(), add pts = frame->pts; to keep it updated.
(What is the difference between the pts of a frame and of a pkt?)
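For context, a minimal sketch of where that update could sit inside xdecode::receive(); the codec member name and the locking pattern follow the other classes in this series and are assumptions:

AVFrame* xdecode::receive()
{
    mux.lock();
    if (!codec)                 // assumed member: the AVCodecContext* opened in this class
    {
        mux.unlock();
        return nullptr;
    }
    AVFrame* frame = av_frame_alloc();
    int re = avcodec_receive_frame(codec, frame);
    if (re != 0)
    {
        mux.unlock();
        av_frame_free(&frame);
        return nullptr;
    }
    pts = frame->pts;           // remember the pts of the most recently decoded frame
    mux.unlock();
    return frame;
}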

Note that the xaudiothread class is isolated from xdemux: it only calls the xdecode, xresample and xaudioplay classes, so it uses the pts kept in xdecode rather than anything inside xdemux.
Add:
pts = adecode->pts - audioplay->GetNoPlayMs();
cout << "audio pts:" << pts << endl;
Each receive() advances the decoder's pts in jumps: that value is the pts of the last packet/frame that was decoded, but the sample currently coming out of the speakers is still somewhere before it, because part of the decoded audio is sitting unplayed in the output buffer. So the current audio clock is that pts (copied from the packet and already converted to milliseconds) minus the time that has not yet been played by the audio device.
void xaudiothread::run()
{
    cout << "Audio thread started" << endl;
    unsigned char* pcm = new unsigned char[1024 * 1024 * 10];
    while (!isexit)
    {
        mux.lock();
        if (packs.empty() || !adecode || !resample || !audioplay)
        {
            mux.unlock();
            msleep(1);
            continue;
        }
        AVPacket* pkt = packs.front();
        packs.pop_front();
        bool re = adecode->send(pkt);
        if (!re)
        {
            mux.unlock();
            msleep(1);
            continue;
        }
        // one send() may be followed by several receive() calls
        while (!isexit)
        {
            AVFrame* frame = adecode->receive();
            if (!frame) break;
            // audio clock = pts of the last decoded frame minus the audio still waiting in the output buffer
            pts = adecode->pts - audioplay->GetNoPlayMs();
            cout << "audio pts:" << pts << endl;
            int size = resample->Resample(frame, pcm);  // Resample() frees frame internally
            while (!isexit)
            {
                if (size <= 0) break;
                if (audioplay->Getfree() < size)
                {
                    msleep(1);
                    continue;
                }
                audioplay->write(pcm, size);
                break;
            }
        }
        mux.unlock();
    }
    delete[] pcm;  // new[] must be paired with delete[]
}
In the audio playback class (xaudioplay), add a function that returns how much audio time has not been played yet: divide the number of unplayed bytes by the byte size of one second of audio, then multiply by 1000 to get milliseconds.
// declared virtual in the class declaration; "virtual" is not repeated on the out-of-class definition
long long xaudioplay::GetNoPlayMs()
{
    mux.lock();
    if (!output)
    {
        mux.unlock();
        return 0;
    }
    long long pts = 0;
    // bytes that have been written but not yet played
    double size = output->bufferSize() - output->bytesFree();
    // byte size of one second of audio
    double secsize = samplerate * (samplesize / 8) * channels;
    if (secsize > 0)
        pts = size / secsize * 1000;
    mux.unlock();
    return pts;
}
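A quick sanity check of that formula with assumed numbers (44100 Hz, 16-bit samples, 2 channels; not values from the article):

#include <iostream>

int main()
{
    // Assumed format: 44100 Hz, 16-bit samples, 2 channels
    double samplerate = 44100, samplesize = 16, channels = 2;
    double secsize = samplerate * (samplesize / 8) * channels; // 176400 bytes per second
    double unplayed = 8820;                                    // bytes still sitting in the output buffer
    std::cout << unplayed / secsize * 1000 << " ms\n";         // prints 50 ms
    return 0;
}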
3. Syncing video to audio
In the xvideothread class, add a member:
long long synpts = 0;
It is reset to 0 each time the video thread is opened, and the external demux thread updates it with the audio thread's pts:
if (vt && at)
{
    vt->synpts = at->pts;
}
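For context, a minimal sketch of where that assignment could live in the demux thread's loop; the class name xdemuxthread and the packet-routing calls (isaudio(), push()) are assumptions based on the surrounding code:

void xdemuxthread::run()
{
    while (!isexit)
    {
        mux.lock();
        if (!demux)
        {
            mux.unlock();
            msleep(5);
            continue;
        }
        // keep the video thread's sync clock following the audio thread's clock
        if (vt && at)
        {
            vt->synpts = at->pts;
        }
        AVPacket* pkt = demux->readfz();
        if (!pkt)
        {
            mux.unlock();
            msleep(5);
            continue;
        }
        // route the packet to the audio or video thread (isaudio()/push() are assumed helpers)
        if (demux->isaudio(pkt)) at->push(pkt);
        else vt->push(pkt);
        mux.unlock();
    }
}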
Perform the audio-video synchronization in the video thread's run(): when the audio clock (synpts) has not yet reached the pts the video decoder has produced, i.e. the video is running ahead, unlock to yield the mutex to other threads and loop back to re-check. This holds video playback back until audio playback catches up (a fuller sketch follows the snippet).
if (synpts < vdecode->pts)  // audio clock is behind the decoded video frame: hold the video
{
    mux.unlock();
    //msleep(1);
    continue;
}
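For context, a minimal sketch of how that check can sit inside xvideothread::run(); the structure mirrors the audio thread above, and the hand-off of the decoded frame to the display (call->repaint(frame)) is an assumption:

void xvideothread::run()
{
    while (!isexit)
    {
        mux.lock();
        if (packs.empty() || !vdecode)
        {
            mux.unlock();
            msleep(1);
            continue;
        }
        // audio clock has not reached the frame we decoded last: hold video back
        if (synpts < vdecode->pts)
        {
            mux.unlock();
            msleep(1);
            continue;
        }
        AVPacket* pkt = packs.front();
        packs.pop_front();
        bool re = vdecode->send(pkt);
        if (!re)
        {
            mux.unlock();
            msleep(1);
            continue;
        }
        while (!isexit)
        {
            AVFrame* frame = vdecode->receive();
            if (!frame) break;
            if (call) call->repaint(frame);  // assumed hand-off to the display widget
        }
        mux.unlock();
    }
}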