ijkplayer code walkthrough: a detailed look at the av_read_frame() data stream reading flow in the read_thread() thread
2022-06-13 06:29:00 【That's right】
Review
The ijkplayer startup flow:
- In the Android application, the user calls the IjkLibLoader wrapper interface to load the three library files ijkffmpeg, ijksdl and ijkplayer into the Android system;
- The player is initialized: the JNI interface function native_setup() is called, which creates the player's message queue and sets up the playback-related parameters;
- In the Android application, the user calls the wrapper functions createPlayer() and prepareAsync() to create the player and bring it into the prepared (ready-to-play) state;
- The player is started.
Earlier we analyzed the content related to prepareAsync(); one of its more important calls is VideoState *is = stream_open(ffp, file_name, NULL);
Inside this function:
- Three packet queues are created, for video, audio and subtitles, and the three streams are stored in is as AVStream *audio_st, *subtitle_st and *video_st.
- Two threads are created, the read_thread() thread and the video_refresh() thread (see the sketch at the end of this review).
- The decoder-related parameters are initialized, and the function returns.
At this point the player is capable of playing, and this process covers most of the ijkplayer source code. Once playback starts, the program's logic architecture is already in place; during operation it only has to handle user actions such as switching streams.
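As a reminder of what stream_open() sets up, below is a heavily trimmed sketch of its skeleton. It follows the ffplay/ijkplayer pattern (packet_queue_init, SDL_CreateThreadEx, the "ff_vout"/"ff_read" thread names), but frame-queue and clock initialization are omitted and the exact field names are assumptions rather than verbatim source.
static VideoState *stream_open(FFPlayer *ffp, const char *filename, AVInputFormat *iformat)
{
    VideoState *is = av_mallocz(sizeof(VideoState));   ///> shared player state, read by all threads
    if (!is)
        return NULL;
    is->filename = av_strdup(filename);
    is->iformat  = iformat;

    ///> 1. the three packet queues: video, audio, subtitle
    if (packet_queue_init(&is->videoq)    < 0 ||
        packet_queue_init(&is->audioq)    < 0 ||
        packet_queue_init(&is->subtitleq) < 0)
        goto fail;

    ///> 2. the two worker threads created here
    is->video_refresh_tid = SDL_CreateThreadEx(&is->_video_refresh_tid, video_refresh_thread, ffp, "ff_vout");
    is->read_tid          = SDL_CreateThreadEx(&is->_read_tid,          read_thread,          ffp, "ff_read");
    if (!is->read_tid)
        goto fail;
    return is;

fail:
    stream_close(ffp);
    return NULL;
}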
Let's look back at the read_thread() thread function, summarized as follows:
- Call avformat_open_input(). This function selects the network protocol and the demuxer according to the data source, distinguished by keywords in the user's URL. For example, with "tcpext://192.168.1.31:1717/v-out.h264", ijkplayer resolves the protocol as tcpext and the demuxer as raw h264 (a minimal sketch of opening such a URL follows this summary).
- Call avformat_find_stream_info(ic, opts). This function identifies the encoding format from the stream data itself and determines which decoder the stream should be configured with. By the time this function is entered, the number and types of streams in the AVFormatContext have already been determined; exactly when they get determined is still an open question.
- Call stream_component_open(ffp, st_index[AVMEDIA_TYPE_VIDEO]). Based on the stream information, a decoder is constructed for the data stream; the video, audio and subtitle stream types each get their decoder configured.
- Enter the thread's loop body: av_read_frame(ic, pkt) -> packet_queue_put(&is->videoq, &copy). The read -> enqueue cycle is repeated.
That, in broad strokes, is the main logic of the program.
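As a minimal, self-contained illustration of that first step (an assumption-based sketch, not ijkplayer code: it presumes FFmpeg 5.x-style const signatures and that the private tcpext protocol has been compiled into libavformat), opening such a URL directly would look roughly like this:
#include <libavformat/avformat.h>

static int open_tcpext_stream(AVFormatContext **ic)
{
    ///> force the raw H.264 demuxer: a bare byte stream carries no container header
    const AVInputFormat *fmt = av_find_input_format("h264");
    int err = avformat_open_input(ic, "tcpext://192.168.1.31:1717/v-out.h264", fmt, NULL);
    if (err < 0)
        return err;

    ///> probe the stream so the codec parameters (codec id, resolution, ...) get filled in
    err = avformat_find_stream_info(*ic, NULL);
    if (err < 0) {
        avformat_close_input(ic);
        return err;
    }
    av_dump_format(*ic, 0, (*ic)->url, 0);   ///> same dump that read_thread() performs
    return 0;
}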
This article focuses on the data stream reading process. With the goal clear, let's switch into code-walkthrough mode.
The read_thread() thread
Let's look at the simplified logic of the read_thread() thread, as follows:
void read_thread(void *arg)
{
FFPlayer *ffp = arg; ///> parameters passed down from the Android user-space layer
VideoState *is = ffp->is;
AVFormatContext *ic = NULL;
int err, i, ret __unused;
int st_index[AVMEDIA_TYPE_NB];
AVPacket pkt1, *pkt = &pkt1;
///> stream encoding format identification, part 1
if (ffp->find_stream_info) {
AVDictionary **opts = setup_find_stream_info_opts(ic, ffp->codec_opts); ///> get the per-stream codec option dictionaries
int orig_nb_streams = ic->nb_streams;
do {
if (av_stristart(is->filename, "data:", NULL) && orig_nb_streams > 0) {
for (i = 0; i < orig_nb_streams; i++) {
if (!ic->streams[i] || !ic->streams[i]->codecpar || ic->streams[i]->codecpar->profile == FF_PROFILE_UNKNOWN) {
break;
}
}
if (i == orig_nb_streams) {
break;
}
}
err = avformat_find_stream_info(ic, opts); ///> run the probing process; the per-stream options dictionaries carry flags for the identified stream types
} while(0);
ffp_notify_msg1(ffp, FFP_MSG_FIND_STREAM_INFO);
}
is->realtime = is_realtime(ic);
av_dump_format(ic, 0, is->filename, 0);
///> stream encoding format identification, part 2
int video_stream_count = 0;
int h264_stream_count = 0;
int first_h264_stream = -1;
for (i = 0; i < ic->nb_streams; i++) {
AVStream *st = ic->streams[i];
enum AVMediaType type = st->codecpar->codec_type;
st->discard = AVDISCARD_ALL;
if (type >= 0 && ffp->wanted_stream_spec[type] && st_index[type] == -1)
if (avformat_match_stream_specifier(ic, st, ffp->wanted_stream_spec[type]) > 0)
st_index[type] = i;
// choose first h264
if (type == AVMEDIA_TYPE_VIDEO) {
enum AVCodecID codec_id = st->codecpar->codec_id;
video_stream_count++;
if (codec_id == AV_CODEC_ID_H264) {
h264_stream_count++;
if (first_h264_stream < 0)
first_h264_stream = i;
}
}
av_log(NULL, AV_LOG_INFO, "DEBUG %s, LINE:%d ,CODEC_ID:%d\n",__FILE__, __LINE__, (uint32_t)st->codecpar->codec_id);
}
///> in case multiple video streams were found
if (video_stream_count > 1 && st_index[AVMEDIA_TYPE_VIDEO] < 0) {
st_index[AVMEDIA_TYPE_VIDEO] = first_h264_stream;
av_log(NULL, AV_LOG_WARNING, "multiple video stream found, prefer first h264 stream: %d\n", first_h264_stream);
}
///> pick the best stream of each type, one by one
if (!ffp->video_disable)
st_index[AVMEDIA_TYPE_VIDEO] =
av_find_best_stream(ic, AVMEDIA_TYPE_VIDEO,
st_index[AVMEDIA_TYPE_VIDEO], -1, NULL, 0);
if (!ffp->audio_disable)
st_index[AVMEDIA_TYPE_AUDIO] =
av_find_best_stream(ic, AVMEDIA_TYPE_AUDIO,
st_index[AVMEDIA_TYPE_AUDIO],
st_index[AVMEDIA_TYPE_VIDEO],
NULL, 0);
if (!ffp->video_disable && !ffp->subtitle_disable)
st_index[AVMEDIA_TYPE_SUBTITLE] =
av_find_best_stream(ic, AVMEDIA_TYPE_SUBTITLE,
st_index[AVMEDIA_TYPE_SUBTITLE],
(st_index[AVMEDIA_TYPE_AUDIO] >= 0 ?
st_index[AVMEDIA_TYPE_AUDIO] :
st_index[AVMEDIA_TYPE_VIDEO]),
NULL, 0);
is->show_mode = ffp->show_mode;
///> open the streams (create the decoder for each component)
if (st_index[AVMEDIA_TYPE_AUDIO] >= 0) {
stream_component_open(ffp, st_index[AVMEDIA_TYPE_AUDIO]);
} else {
ffp->av_sync_type = AV_SYNC_VIDEO_MASTER;
is->av_sync_type = ffp->av_sync_type;
}
ret = -1;
if (st_index[AVMEDIA_TYPE_VIDEO] >= 0) {
ret = stream_component_open(ffp, st_index[AVMEDIA_TYPE_VIDEO]);
}
if (is->show_mode == SHOW_MODE_NONE)
is->show_mode = ret >= 0 ? SHOW_MODE_VIDEO : SHOW_MODE_RDFT;
if (st_index[AVMEDIA_TYPE_SUBTITLE] >= 0) {
stream_component_open(ffp, st_index[AVMEDIA_TYPE_SUBTITLE]);
}
ffp_notify_msg1(ffp, FFP_MSG_COMPONENT_OPEN);
///> notify the Android user-space program
if (!ffp->ijkmeta_delay_init) {
ijkmeta_set_avformat_context_l(ffp->meta, ic);
}
///> set the meta dictionary entries
ffp->stat.bit_rate = ic->bit_rate;
if (st_index[AVMEDIA_TYPE_VIDEO] >= 0)
ijkmeta_set_int64_l(ffp->meta, IJKM_KEY_VIDEO_STREAM, st_index[AVMEDIA_TYPE_VIDEO]);
if (st_index[AVMEDIA_TYPE_AUDIO] >= 0)
ijkmeta_set_int64_l(ffp->meta, IJKM_KEY_AUDIO_STREAM, st_index[AVMEDIA_TYPE_AUDIO]);
if (st_index[AVMEDIA_TYPE_SUBTITLE] >= 0)
ijkmeta_set_int64_l(ffp->meta, IJKM_KEY_TIMEDTEXT_STREAM, st_index[AVMEDIA_TYPE_SUBTITLE]);
///> Player state adjustment
ffp->prepared = true;
ffp_notify_msg1(ffp, FFP_MSG_PREPARED);
if (ffp->auto_resume) {
ffp_notify_msg1(ffp, FFP_REQ_START);
ffp->auto_resume = 0;
}
/* offset should be seeked*/
if (ffp->seek_at_start > 0) {
ffp_seek_to_l(ffp, (long)(ffp->seek_at_start));
}
///> enter the thread's main loop body
for (;;){
///> enqueue the attached picture (e.g. cover art) once, if requested
if (is->queue_attachments_req) {
///> this flag is set to 1 when the stream component is opened
if (is->video_st && (is->video_st->disposition & AV_DISPOSITION_ATTACHED_PIC)) {
AVPacket copy = { 0 };
if ((ret = av_packet_ref(&copy, &is->video_st->attached_pic)) < 0)
goto fail;
packet_queue_put(&is->videoq, &copy);
packet_queue_put_nullpacket(&is->videoq, is->video_stream);
}
is->queue_attachments_req = 0;
}
///> read one packet from the input
pkt->flags = 0;
ret = av_read_frame(ic, pkt);
///> on a discontinuity flag, flush all three packet queues
if (pkt->flags & AV_PKT_FLAG_DISCONTINUITY) {
if (is->audio_stream >= 0) {
packet_queue_put(&is->audioq, &flush_pkt);
}
if (is->subtitle_stream >= 0) {
packet_queue_put(&is->subtitleq, &flush_pkt);
}
if (is->video_stream >= 0) {
packet_queue_put(&is->videoq, &flush_pkt);
}
}
///> route the packet into the queue matching its stream index
if (pkt->stream_index == is->audio_stream && pkt_in_play_range) {
packet_queue_put(&is->audioq, pkt);
} else if (pkt->stream_index == is->video_stream && pkt_in_play_range
&& !(is->video_st && (is->video_st->disposition & AV_DISPOSITION_ATTACHED_PIC))) {
packet_queue_put(&is->videoq, pkt);
} else if (pkt->stream_index == is->subtitle_stream && pkt_in_play_range) {
packet_queue_put(&is->subtitleq, pkt);
} else {
av_packet_unref(pkt);
}
///> update playback statistics
ffp_statistic_l(ffp);
av_log(NULL, AV_LOG_INFO, " %s / %s , LINE:%d \n",__FILE__, __func__, __LINE__);
}
}
The above is a simplified version of the function's structure; each key node carries an annotation.
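For reference, the "enqueue" half of the read -> enqueue cycle simply appends the packet to a mutex-protected linked list and signals the decoder thread. The sketch below follows the ffplay/ijkplayer packet_queue_put_private() pattern from memory; field names such as first_pkt/last_pkt are assumptions, and the locking wrapper packet_queue_put() is omitted.
typedef struct MyAVPacketList {
    AVPacket pkt;
    struct MyAVPacketList *next;
    int serial;
} MyAVPacketList;

static int packet_queue_put_private(PacketQueue *q, AVPacket *pkt)
{
    MyAVPacketList *pkt1;

    if (q->abort_request)
        return -1;

    pkt1 = av_malloc(sizeof(MyAVPacketList));
    if (!pkt1)
        return -1;
    pkt1->pkt    = *pkt;               ///> ownership of the packet buffer moves into the queue
    pkt1->next   = NULL;
    pkt1->serial = q->serial;

    if (!q->last_pkt)
        q->first_pkt = pkt1;           ///> queue was empty
    else
        q->last_pkt->next = pkt1;
    q->last_pkt = pkt1;
    q->nb_packets++;
    q->size += pkt1->pkt.size + sizeof(*pkt1);
    q->duration += pkt1->pkt.duration;

    SDL_CondSignal(q->cond);           ///> wake up the decoder thread waiting in packet_queue_get()
    return 0;
}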
Reading the data stream
In the read_thread() thread, av_read_frame(ic, pkt) is called in a loop to read the contents of the data stream. Tracing the code, the function call relationships are as follows.
av_read_frame(ic, pkt); ///> entry parameter: AVFormatContext *ic
-> read_frame_internal(s, pkt);
-> ff_read_packet(s, &cur_pkt); ///> entry parameter: AVPacket cur_pkt
-> av_init_packet(pkt);
-> s->iformat->read_packet(s, pkt); ///> here read_packet is ff_raw_read_partial_packet(AVFormatContext *s, AVPacket *pkt), in libavformat/rawdec.c
-> av_new_packet(pkt, size)
-> avio_read_partial(s->pb, pkt->data, size); ///> entry parameter: s->pb (AVIOContext *); the function lives in libavformat/aviobuf.c
-> s->read_packet(s->opaque, buf, size); ///> this read_packet is io_read_packet(), which eventually calls tcp_read(); see the analysis below
-> memcpy(buf, s->buf_ptr, len);
-> s->buf_ptr += len;
-> return len;
-> av_shrink_packet(pkt, ret);
-> av_parser_init(st->codecpar->codec_id)
-> avcodec_get_name(st->codecpar->codec_id)
-> compute_pkt_fields(s, st, NULL, pkt, AV_NOPTS_VALUE, AV_NOPTS_VALUE)
-> read_from_packet_buffer(&s->internal->parse_queue, &s->internal->parse_queue_end, pkt)
-> update_stream_avctx(s);
-> add_to_pktbuf(&s->internal->packet_buffer, pkt,&s->internal->packet_buffer_end, 1);
Next, let's analyze the relationship between the entry parameters, using the call chain io_read_packet() -> ffurl_read() -> tcp_read() as the guide map.
First, sort out the function entry parameters, as follows.
The first function entry parameter follows the chain AVFormatContext -> AVIOContext -> opaque; in the AVIOContext structure, opaque is declared as a void * member.
///> the opaque entry parameter is AVIOContext.opaque (i.e. the content of ic->pb->opaque)
static int io_read_packet(void *opaque, uint8_t *buf, int buf_size)
{
AVIOInternal *internal = opaque; ///> opaque is cast directly to an AVIOInternal pointer; it originates from the AVIOContext ic->pb
return ffurl_read(internal->h, buf, buf_size); ///> ffurl_read() ends up calling tcp_read(), the read function added by the private protocol
}
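How does io_read_packet() become the AVIOContext's read callback in the first place? When the URL is opened, ffurl_open() creates the URLContext and ffio_fdopen() wraps it in an AVIOContext. The following is a trimmed, assumption-based rendering of that wiring for the FFmpeg versions that still use the AVIOInternal wrapper; buffer sizing details and error handling are omitted.
int ffio_fdopen(AVIOContext **s, URLContext *h)
{
    AVIOInternal *internal = av_mallocz(sizeof(*internal));
    uint8_t *buffer = av_malloc(IO_BUFFER_SIZE);            ///> IO_BUFFER_SIZE: typically 32768

    internal->h = h;                                        ///> the URLContext created by ffurl_open()

    ///> this is where io_read_packet() becomes s->read_packet and 'internal'
    ///> becomes s->opaque, which is exactly what io_read_packet() casts back
    *s = avio_alloc_context(buffer, IO_BUFFER_SIZE,
                            h->flags & AVIO_FLAG_WRITE,
                            internal,
                            io_read_packet,
                            io_write_packet,
                            io_seek);
    return *s ? 0 : AVERROR(ENOMEM);
}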
//> the AVIOInternal structure is defined as follows
typedef struct AVIOInternal {
URLContext *h;
} AVIOInternal;
//> the URLContext structure is defined as follows
typedef struct URLContext {
const AVClass *av_class; /**< information for av_log(). Set by url_open(). */
const struct URLProtocol *prot;
void *priv_data;
char *filename; /**< specified URL */
int flags;
int max_packet_size; /**< if non zero, the stream is packetized with this max packet size */
int is_streamed; /**< true if streamed (no seek possible), default = false */
int is_connected;
AVIOInterruptCB interrupt_callback;
int64_t rw_timeout; /**< maximum time to wait for (network) read/write operation completion, in mcs */
const char *protocol_whitelist;
const char *protocol_blacklist;
int min_packet_size; /**< if non zero, the stream is packetized with this min packet size */
int64_t pts; ///< pts member added for the private protocol
} URLContext;
///> the type behind this function's URLContext *h entry parameter is laid out in the structures above
static int tcp_read(URLContext *h, uint8_t *buf, int size)
{
uint8_t header[HEADER_SIZE];
TCPEXTContext *s = h->priv_data;
int ret;
if (!(h->flags & AVIO_FLAG_NONBLOCK)) {
ret = ff_network_wait_fd_timeout(s->fd, 0, h->rw_timeout, &h->interrupt_callback);
if (ret)
return ret;
}
ret = recv(s->fd, header, HEADER_SIZE, MSG_WAITALL);
if(ret < HEADER_SIZE){
av_log(NULL, AV_LOG_INFO, "%s/%s(), LINE:%d ,READ_HEADER_AIL length:%d \n",__FILE__, __func__, __LINE__, ret);
return 0;
}
uint32_t msb = header[0] << 24 | header[1] << 16 | header[2] << 8 | header[3];
uint32_t lsb = header[4] << 24 | header[5] << 16 | header[6] << 8 | header[7];
uint32_t len = header[8] << 24 | header[9] << 16 | header[10] << 8 | header[11];
uint64_t pts = (uint64_t)msb << 32 | lsb; ///> cast needed: shifting a 32-bit value left by 32 is undefined behavior
av_log(NULL, AV_LOG_INFO, "READ HEADER msb:%08x, lsb:%08x, len:%08x \n", msb, lsb, len);
assert( pts == NO_PTS || (pts & 0x8000000000000000) == 0);
assert(len);
ret = recv(s->fd, buf, len, MSG_WAITALL);
if (ret > 0){
av_application_did_io_tcp_read(s->app_ctx, (void*)h, ret);
uint32_t hsb = buf[0] << 24 | buf[1] << 16 | buf[2] << 8 | buf[3];
msb = buf[4] << 24 | buf[5] << 16 | buf[6] << 8 | buf[7];
lsb = buf[8] << 24 | buf[9] << 16 | buf[10] << 8 | buf[11];
av_log(NULL, AV_LOG_INFO, "H264 HEADER hsb:%08x, msb:%08x, lsb:%08x \n", hsb, msb, lsb);
}
av_log(NULL, AV_LOG_INFO, "%s/%s(), LINE:%d ,recv length:%d \n",__FILE__, __func__, __LINE__, ret);
return ret < 0 ? ff_neterrno() : ret;
}
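For completeness: tcp_read() only gets called because the private protocol registers it in a URLProtocol table inside libavformat. The entry below is an assumption-based sketch modelled on FFmpeg's own ff_tcp_protocol in libavformat/tcp.c; the open/write/close functions and the option table are presumed to follow the same pattern and are not shown.
///> assumption: the private protocol is declared alongside tcp.c, e.g. in libavformat/tcpext.c
const URLProtocol ff_tcpext_protocol = {
    .name            = "tcpext",              ///> matched against the "tcpext://" URL prefix
    .url_open        = tcpext_open,           ///> hypothetical: connects the socket, fills TCPEXTContext
    .url_read        = tcp_read,              ///> the function analyzed above
    .url_write       = tcpext_write,          ///> hypothetical
    .url_close       = tcpext_close,          ///> hypothetical
    .priv_data_size  = sizeof(TCPEXTContext), ///> becomes URLContext.priv_data
    .flags           = URL_PROTOCOL_FLAG_NETWORK,
};
The new protocol also has to be added to libavformat's protocol list (generated by configure) so that ffurl_open() can resolve the "tcpext://" prefix to this table.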
Summary:
- 1> The read_thread() thread defines AVFormatContext *ic and AVPacket pkt1 as thread-wide variables, and the entry parameters of av_read_frame(ic, pkt) all come from them; the tcp_read() entry parameters are h = ((AVIOInternal *)ic->pb->opaque)->h and buf = pkt->data.
- 2> The pts data is stored only in URLContext *h; it is the private demuxer's ff_raw_read_partial_packet() function that copies the pts value into pkt->pts (see the sketch below).
- 3> In the ijkplayer SDK, the way a private communication protocol and a private demuxer are added follows basically the same idea, so this module can serve as a reference.
This is also where the pts value in the packet_buffer objects comes from: the pts parsed while reading the data is carried along with the packet content.
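To make point 2 concrete, here is how that copy presumably looks inside the modified raw demuxer. The base function mirrors libavformat/rawdec.c; the lines marked as the private extension are reconstructed from the walkthrough and are an assumption (in the stock tree AVIOInternal is private to aviobuf.c, so the real patch presumably exposes it or reaches the URLContext through a helper).
///> libavformat/rawdec.c (simplified), with the assumed private pts plumbing added
int ff_raw_read_partial_packet(AVFormatContext *s, AVPacket *pkt)
{
    int ret, size = RAW_PACKET_SIZE;                 ///> 1024 in rawdec.c

    if (av_new_packet(pkt, size) < 0)
        return AVERROR(ENOMEM);

    pkt->pos = avio_tell(s->pb);
    pkt->stream_index = 0;
    ret = avio_read_partial(s->pb, pkt->data, size); ///> ends up in io_read_packet() -> tcp_read()
    if (ret < 0) {
        av_packet_unref(pkt);
        return ret;
    }

    ///> private extension (assumption): tcp_read() stored the header pts in the URLContext,
    ///> reachable through AVIOContext.opaque -> AVIOInternal -> URLContext
    AVIOInternal *internal = s->pb->opaque;
    pkt->pts = internal->h->pts;

    av_shrink_packet(pkt, ret);
    return ret;
}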