FFmpeg audio/video playback: solving audio synchronization with time_base, with SDL rendering the picture
2022-07-25 22:20:00 【Steal the moon by riding the wind】
Summary
I have benefited a great deal from the material on CSDN, so I would like to take this opportunity to share some projects I wrote in the past, letting the data stored on my computer reach more friends and deliver greater value. Of course, the project may still contain some bugs; I hope anyone who runs into problems will raise them so we can solve them together. Let's keep at it!
The Ambarella dash cam is a portable dash cam that records GPS information. This article presents an MFC application tool, developed in C++, that parses the MP4 files recorded by the dash cam, extracts the audio and video data for playback, and parses the GPS data for display on a Baidu map. During playback you can visually follow the vehicle's current position and quickly check whether it traveled along the specified route, which helps with after-the-fact analysis of the vehicle's actual driving path.

The FFmpeg calling sequence:
Allocate space and initialize:
av_mallocz();
av_register_all();
avformat_alloc_context();
Open the file:
avformat_open_input();
Find the decoder:
avcodec_find_decoder();
Open the decoder (video decoders are AVMEDIA_TYPE_VIDEO, audio decoders AVMEDIA_TYPE_AUDIO, subtitle decoders AVMEDIA_TYPE_SUBTITLE):
avcodec_open2();
In addition, subtitles can be decoded through the following interface:
avcodec_decode_subtitle2();
Read a frame of data:
av_frame_alloc();
av_init_packet();
av_read_frame();
Seek to a specified position:
av_seek_frame();
Release resources:
av_frame_free();
av_free();
Close:
avcodec_close();
avformat_close_input();
Video rendering uses SDL.
The specific code follows. This is only the logic code; if you need the UI interface, please contact me at 18824182332 (same number on WeChat). Thank you!
/*
========================================================================
File name: ztplayerDll.h
Module:
Author: Mid Tang studio (zt)18824182332
Create Time: 2016/12/10 10:41:00
Modify By:
Modify Date:
========================================================================
*/
#ifndef __ZONTTANG_ZTPLAYERDLL_H__
#define __ZONTTANG_ZTPLAYERDLL_H__
#define ZONGTANG_H_DLL_EXPORTS
#ifdef ZONGTANG_H_DLL_EXPORTS
#define ZONGTANGDLL_API __declspec(dllexport)
#else
#define ZONGTANGDLL_API __declspec(dllimport)
#endif
#define __STDC_CONSTANT_MACROS
#include "const.h"
typedef void(*FrameCallBack)(const AVPacket* packet);
typedef void(*FrameEndCallBack)();
typedef void(*AnalysisGpsEndCallBack)();
ZONGTANGDLL_API void initSDK(VideoState** p);
ZONGTANGDLL_API int openFile(char filepath[], int64_t& duration);
ZONGTANGDLL_API int setFrameCallback(FrameCallBack _callback);
ZONGTANGDLL_API int setFrameEndCallback(FrameEndCallBack _callback);
ZONGTANGDLL_API int initCodec();
ZONGTANGDLL_API int setWindownHandle(HWND handle);
ZONGTANGDLL_API int play();
ZONGTANGDLL_API int seek(int64_t timestamp);
ZONGTANGDLL_API int pause(bool enable);
ZONGTANGDLL_API bool getState();
ZONGTANGDLL_API int desSDK();
ZONGTANGDLL_API int setVolumeEnable(bool enable);
ZONGTANGDLL_API int setVolume(double value);
ZONGTANGDLL_API int analysisGps();
ZONGTANGDLL_API int setAnalysisGpsEndCallback(AnalysisGpsEndCallBack _callback);
ZONGTANGDLL_API int saveFrame(char** filename, LPTSTR Dir);
#endif

/*
========================================================================
File name: ztplayerDll.cpp
Module:
Author: Mid Tang studio (zt)18824182332
Create Time: 2016/12/10 10:41:00
Modify By:
Modify Date:
========================================================================
*/
#pragma once
#include "stdafx.h"
#include "tools.h"
#include "mapUtils.h"
bool mPlaying = false;
VideoState *global_video_state;
FrameCallBack frameCallBack;
FrameEndCallBack frameEndCallBack;
AnalysisGpsEndCallBack analysisGpsEndCallBack;
PictureHandle* global_picture_handle;
BOOL m_Release;
#pragma region Audio module
int audio_decode_frame(VideoState *is, double *pts_ptr) {
int len1, len2, decoded_data_size, n;
AVPacket *pkt = &is->audio_pkt;
int got_frame = 0;
int64_t dec_channel_layout;
int wanted_nb_samples, resampled_data_size;
double pts = 0;
while (1) {
while (is->audio_pkt_size > 0) {
if (!is->audio_frame) {
if (!(is->audio_frame = av_frame_alloc())) {
return AVERROR(ENOMEM);
}
}
else
av_frame_unref(is->audio_frame);
if (is->audio_st == NULL)
{
break;
}
if (m_Release)
{
break;
}
/**
* When audio is packed into an AVPacket, one AVPacket may contain multiple AVFrames.
* Some decoders only decode the first AVFrame; in that case we must decode the
* remaining AVFrames ourselves.
*/
len1 = avcodec_decode_audio4(is->audio_st->codec, is->audio_frame, &got_frame, pkt);
if (len1 < 0) {
is->audio_pkt_size = 0;
printf("break\n");
break;
}
is->audio_pkt_data += len1;
is->audio_pkt_size -= len1;
if (got_frame <= 0)
continue;
// At this point we have obtained an AVFrame
decoded_data_size = av_samples_get_buffer_size(NULL,
is->audio_frame->channels, is->audio_frame->nb_samples,
(AVSampleFormat)is->audio_frame->format, 1);
// Get this AVFrame's channel layout, e.g. stereo
dec_channel_layout =
(is->audio_frame->channel_layout
&& is->audio_frame->channels
== av_get_channel_layout_nb_channels(
is->audio_frame->channel_layout)) ?
is->audio_frame->channel_layout :
av_get_default_channel_layout(
is->audio_frame->channels);
// Number of samples per channel in this AVFrame
wanted_nb_samples = is->audio_frame->nb_samples;
/**
* Next, check whether the SDL audio settings we configured earlier (AV_SAMPLE_FMT_S16),
* the channel layout, the sample rate, and the per-channel sample count match those of
* the AVFrame we just obtained. If any of them differ, we simply run the AVFrame
* through swr_convert so that it matches the SDL settings and can be played.
*/
if (is->audio_frame->format != is->audio_src_fmt
|| dec_channel_layout != is->audio_src_channel_layout
|| is->audio_frame->sample_rate != is->audio_src_freq
|| (wanted_nb_samples != is->audio_frame->nb_samples
&& !is->swr_ctx)) {
if (is->swr_ctx)
swr_free(&is->swr_ctx);
is->swr_ctx = swr_alloc_set_opts(NULL,
is->audio_tgt_channel_layout, is->audio_tgt_fmt,
is->audio_tgt_freq, dec_channel_layout,
(AVSampleFormat)is->audio_frame->format, is->audio_frame->sample_rate,
0, NULL);
if (!is->swr_ctx || swr_init(is->swr_ctx) < 0) {
fprintf(stderr, "swr_init() failed\n");
break;
}
is->audio_src_channel_layout = dec_channel_layout;
is->audio_src_channels = is->audio_st->codec->channels;
is->audio_src_freq = is->audio_st->codec->sample_rate;
is->audio_src_fmt = is->audio_st->codec->sample_fmt;
}
/**
* If the check above detected a mismatch, swr_ctx has been initialized and the
* conversion below will run as planned.
*/
if (is->swr_ctx) {
// const uint8_t *in[] = { is->audio_frame->data[0] };
const uint8_t **in =
(const uint8_t **)is->audio_frame->extended_data;
uint8_t *out[] = { is->audio_buf2 };
if (wanted_nb_samples != is->audio_frame->nb_samples) {
fprintf(stdout, "swr_set_compensation \n");
if (swr_set_compensation(is->swr_ctx,
(wanted_nb_samples - is->audio_frame->nb_samples)
* is->audio_tgt_freq
/ is->audio_frame->sample_rate,
wanted_nb_samples * is->audio_tgt_freq
/ is->audio_frame->sample_rate) < 0) {
fprintf(stderr, "swr_set_compensation() failed\n");
break;
}
}
/**
* Convert this AVFrame into the format SDL expects. Many older code samples omit
* this step, which is why some audio plays and some does not: for example, the
* source audio may happen to already be AV_SAMPLE_FMT_S16.
* swr_convert returns the number of samples per channel after conversion.
*/
len2 = swr_convert(is->swr_ctx, out,
sizeof(is->audio_buf2) / is->audio_tgt_channels
/ av_get_bytes_per_sample(is->audio_tgt_fmt),
in, is->audio_frame->nb_samples);
if (len2 < 0) {
fprintf(stderr, "swr_convert() failed\n");
break;
}
if (len2 == sizeof(is->audio_buf2) / is->audio_tgt_channels / av_get_bytes_per_sample(is->audio_tgt_fmt)) {
fprintf(stderr, "warning: audio buffer is probably too small\n");
swr_init(is->swr_ctx);
}
is->audio_buf = is->audio_buf2;
// samples per channel x channel count x bytes per sample
resampled_data_size = len2 * is->audio_tgt_channels
* av_get_bytes_per_sample(is->audio_tgt_fmt);
}
else {
resampled_data_size = decoded_data_size;
is->audio_buf = is->audio_frame->data[0];
}
pts = is->audio_clock;
*pts_ptr = pts;
n = 2 * is->audio_st->codec->channels;
is->audio_clock += (double)resampled_data_size / (double)(n * is->audio_st->codec->sample_rate);
return resampled_data_size;
}
if (pkt->data)
av_free_packet(pkt);
if (is->quit)
return -1;
if (!mPlaying)
{
if (is->audio_buf != NULL)
{
is->audio_buf_size = 1024;
memset(is->audio_buf, 0, is->audio_buf_size);
}
SDL_Delay(10);
continue;
}
if (tools::packet_queue_get(&is->audioq, pkt, is->decodeState) < 0)
return -1;
if (is->audio_clock < 0)
{
continue;
}
is->audio_pkt_data = pkt->data;
is->audio_pkt_size = pkt->size;
if (pkt->pts != AV_NOPTS_VALUE)
{
is->audio_clock = av_q2d(is->audio_st->time_base)*pkt->pts;
}
}
}
void audio_callback(void *userdata, Uint8 *stream, int len) {
VideoState *is = (VideoState *)userdata;
int len1, audio_data_size = 0;
double pts;
double delay = 0;
while (len > 0)
{
if (global_video_state->quit)
{
break;
}
if (m_Release)
{
return;
}
if (is->audio_buf_index >= is->audio_buf_size) {
if (is->audioq.size > 0)
{
audio_data_size = audio_decode_frame(is, &pts);
}
if (audio_data_size < 0) {
fprintf(stderr, "audio_decode_frame failed: %d\n", audio_data_size);
/* silence */
is->audio_buf_size = 1024;
memset(is->audio_buf, 0, is->audio_buf_size);
}
else {
is->audio_buf_size = audio_data_size;
}
is->audio_buf_index = 0;
}
else
{
len1 = is->audio_buf_size - is->audio_buf_index;
if (len1 > len) {
len1 = len;
}
memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1);
len -= len1;
stream += len1;
is->audio_buf_index += len1;
}
}
}
/* Open the audio device and configure audio output
*/
int stream_component_open(VideoState *is, unsigned int stream_index) {
global_video_state = is;
AVFormatContext *ic = global_video_state->pFormatCtx;
AVCodecContext *codecCtx;
SDL_AudioSpec wanted_spec, spec;
int64_t wanted_channel_layout = 0;
int wanted_nb_channels;
const int next_nb_channels[] = { 0, 0, 1, 6, 2, 6, 4, 6 };
if (stream_index < 0 || stream_index >= ic->nb_streams) {
return -1;
}
codecCtx = ic->streams[stream_index]->codec;
wanted_nb_channels = codecCtx->channels;
if (!wanted_channel_layout || wanted_nb_channels != av_get_channel_layout_nb_channels(wanted_channel_layout)) {
wanted_channel_layout = av_get_default_channel_layout(wanted_nb_channels);
wanted_channel_layout &= ~AV_CH_LAYOUT_STEREO_DOWNMIX;
}
wanted_spec.channels = av_get_channel_layout_nb_channels(wanted_channel_layout);
wanted_spec.freq = codecCtx->sample_rate;
if (wanted_spec.freq <= 0 || wanted_spec.channels <= 0) {
fprintf(stderr, "Invalid sample rate or channel count!\n");
return -1;
}
wanted_spec.format = AUDIO_S16SYS;
wanted_spec.silence = 0;
wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
wanted_spec.callback = audio_callback;
wanted_spec.userdata = is;
while (SDL_OpenAudio(&wanted_spec, &spec) < 0) {
fprintf(stderr, "SDL_OpenAudio (%d channels): %s\n", wanted_spec.channels, SDL_GetError());
wanted_spec.channels = next_nb_channels[FFMIN(7, wanted_spec.channels)];
if (!wanted_spec.channels) {
fprintf(stderr, "No more channel combinations to try, audio open failed\n");
return -1;
}
wanted_channel_layout = av_get_default_channel_layout(wanted_spec.channels);
}
if (spec.format != AUDIO_S16SYS) {
fprintf(stderr, "SDL advised audio format %d is not supported!\n", spec.format);
return -1;
}
if (spec.channels != wanted_spec.channels) {
wanted_channel_layout = av_get_default_channel_layout(spec.channels);
if (!wanted_channel_layout) {
fprintf(stderr, "SDL advised channel count %d is not supported!\n", spec.channels);
return -1;
}
}
is->audio_src_fmt = is->audio_tgt_fmt = AV_SAMPLE_FMT_S16;
is->audio_src_freq = is->audio_tgt_freq = spec.freq;
is->audio_src_channel_layout = is->audio_tgt_channel_layout = wanted_channel_layout;
is->audio_src_channels = is->audio_tgt_channels = spec.channels;
ic->streams[stream_index]->discard = AVDISCARD_DEFAULT;
switch (codecCtx->codec_type) {
case AVMEDIA_TYPE_AUDIO:
SDL_PauseAudio(0);
break;
default:
break;
}
return 0;
}
#pragma endregion
uint64_t global_video_pkt_pts = AV_NOPTS_VALUE;
double synchronize_video(VideoState *is, AVFrame *src_frame, double pts)
{
double frame_delay;
if (pts != 0)
{
is->video_clock = pts;
}
else
{
pts = is->video_clock;
}
is->video_dts = src_frame->pkt_dts;
frame_delay = av_q2d(is->video_st->codec->time_base);
frame_delay += src_frame->repeat_pict * (frame_delay * 0.5);
is->video_clock += frame_delay;
return pts;
}
void alloc_picture(void *userdata)
{
VideoState *is = (VideoState *)userdata;
VideoPicture *vp;
vp = &is->pictq[is->pictq_windex];
vp->width = is->video_st->codec->width;
vp->height = is->video_st->codec->height;
SDL_LockMutex(is->pictq_mutex);
vp->allocated = 1;
SDL_CondSignal(is->pictq_cond);
SDL_UnlockMutex(is->pictq_mutex);
}
static void display_picture(AVPacket *packet, AVFrame* pFrame)
{
if (global_video_state->quit)
{
return;
}
if (global_picture_handle == NULL)
{
return;
}
AVCodecContext * pCodecCtx = global_video_state->video_st->codec;
AVFrame *pFrameYUV = global_picture_handle->pFrameYUV;
global_picture_handle->pFrame = pFrame; // used for screenshots
static struct SwsContext *img_convert_ctx;
if (img_convert_ctx == NULL)
{
img_convert_ctx = sws_getContext(global_video_state->video_st->codec->width, global_video_state->video_st->codec->height,
global_video_state->video_st->codec->pix_fmt,
global_video_state->video_st->codec->width, global_video_state->video_st->codec->height,
PIX_FMT_YUV420P,
SWS_BICUBIC, NULL, NULL, NULL);
if (img_convert_ctx == NULL)
{
fprintf(stderr, "Cannot initialize the conversion context\n");
exit(1);
}
}
if (pFrame == NULL || pFrameYUV == NULL)
{
return;
}
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
pFrameYUV->data, pFrameYUV->linesize);
if (pFrame == NULL || pFrameYUV == NULL)
{
return;
}
SDL_UpdateYUVTexture(global_picture_handle->sdlTexture, &global_picture_handle->srcRect,
pFrameYUV->data[0], pFrameYUV->linesize[0],
pFrameYUV->data[1], pFrameYUV->linesize[1],
pFrameYUV->data[2], pFrameYUV->linesize[2]);
SDL_RenderClear(global_picture_handle->sdlRenderer);
SDL_RenderCopy(global_picture_handle->sdlRenderer, global_picture_handle->sdlTexture, &global_picture_handle->srcRect, &global_picture_handle->sdlRect);
SDL_RenderPresent(global_picture_handle->sdlRenderer);
//SDL End-----------------------
}
double firt_subtitle_last_pts = 0;
double firt_subtitle_delay = 0;
void display_subtitle(VideoState *is)
{
AVPacket pkt1, *packet = &pkt1;
int frameFinished = 0;
if (global_video_state->quit)
{
return;
}
double actual_delay, delay, sync_threshold, ref_clock, diff;
delay = is->frame_last_pts - firt_subtitle_last_pts;
bool drag = false; // prevent subtitles from going out of sync after seeking
if (delay <= 0 || delay >= 1.0)
{
delay = is->frame_last_delay;
drag = true;
}
if ((is->frame_last_pts <firt_subtitle_last_pts + firt_subtitle_delay) && !drag)
{
return;
}
if (tools::packet_queue_get(&is->subtitleq, packet, 0) > 0)
{
int len1 = avcodec_decode_subtitle2(is->subtitle_st->codec, is->pSubtitle, &frameFinished, (AVPacket*)packet);
firt_subtitle_last_pts = is->frame_last_pts;
double delay = is->pSubtitle->end_display_time - is->pSubtitle->start_display_time;
firt_subtitle_delay = delay / 1000.0f;
if (frameFinished > 0)
{
frameCallBack(packet);
}
}
}
int queue_picture(VideoState *is, AVFrame *pFrame, double pts, AVPacket* pkt)
{
VideoPicture *vp;
SDL_LockMutex(is->pictq_mutex);
while (is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE &&
!is->quit)
{
SDL_CondWait(is->pictq_cond, is->pictq_mutex);
}
SDL_UnlockMutex(is->pictq_mutex);
if (global_video_state->quit)
return -1;
vp = &is->pictq[is->pictq_windex];
vp->pkt = pkt;
vp->pFrame = pFrame;
display_picture(pkt, pFrame);
// Callback playback progress
if (frameCallBack != NULL)
{
frameCallBack(pkt);
}
vp->pts = pts;
if (++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE)
{
is->pictq_windex = 0;
}
SDL_LockMutex(is->pictq_mutex);
is->pictq_size++;
SDL_UnlockMutex(is->pictq_mutex);
return 0;
}
int video_thread(void *arg)
{
VideoState *is = (VideoState *)arg;
AVPacket pkt1, *packet = &pkt1;
int len1, frameFinished;
AVFrame *pFrame;
double pts;
#if Debug
pFrame = avcodec_alloc_frame();
#else
pFrame = av_frame_alloc();
#endif
for (;;)
{
if (global_video_state->quit)
{
break;
}
if (m_Release)
{
break;
}
if (!mPlaying)
{
SDL_Delay(10);
continue;
}
if (tools::packet_queue_get(&is->videoq, packet, is->decodeState) < 0)
{
break;
}
pts = 0;
global_video_pkt_pts = packet->pts;
len1 = avcodec_decode_video2(is->video_st->codec, pFrame, &frameFinished, packet);
if (packet->dts == AV_NOPTS_VALUE
&& pFrame->opaque && *(uint64_t*)pFrame->opaque != AV_NOPTS_VALUE)
{
pts = *(uint64_t *)pFrame->opaque;
}
else if (packet->dts != AV_NOPTS_VALUE)
{
pts = packet->dts;
}
else
{
pts = 0;
}
pts *= av_q2d(is->video_st->time_base);
if (frameFinished)
{
pts = synchronize_video(is, pFrame, pts);
if (queue_picture(is, pFrame, pts, packet) < 0)
{
break;
}
}
av_free_packet(packet);
}
av_free(pFrame);
return 0;
}
static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque)
{
SDL_Event event;
event.type = FF_REFRESH_EVENT;
event.user.data1 = opaque;
SDL_PushEvent(&event);
return 0;
}
SDL_TimerID _timeId = 0;
static void schedule_refresh(VideoState *is, int delay)
{
_timeId = SDL_AddTimer(delay, sdl_refresh_timer_cb, is);
}
static void video_refresh_timer(void *userdata)
{
VideoState *is = (VideoState *)userdata;
VideoPicture *vp;
double actual_delay, delay, sync_threshold, ref_clock, diff;
if (is->pFormatCtx == NULL)
{
return;
}
if (is->video_st == NULL)
{
return;
}
if (global_video_state->quit)
{
if (_timeId != 0)
{
SDL_RemoveTimer(_timeId);
}
return;
}
double allTime = is->pFormatCtx->duration* av_q2d(is->video_st->time_base);
int64_t pos = global_video_pkt_pts* av_q2d(is->video_st->codec->pkt_timebase);
int64_t timestamp = is->pFormatCtx->duration / AV_TIME_BASE;
bool end = pos >= timestamp;
if (end)
{
// Playback-finished callback
if (frameEndCallBack != NULL)
{
printf(" acount: %d vcount: %d atime:%f vtime: %f\n",
global_video_state->audioq.nb_packets, global_video_state->videoq.nb_packets,
global_video_state->audio_clock, global_video_state->video_clock);
frameEndCallBack();
}
}
if (is->video_st) {
if (is->pictq_size == 0) {
schedule_refresh(is, 10);
}
else {
vp = &is->pictq[is->pictq_rindex];
delay = vp->pts - is->frame_last_pts;
if (delay <= 0 || delay >= 1.0)
{
delay = is->frame_last_delay;
}
is->frame_last_delay = delay;
is->frame_last_pts = vp->pts;
ref_clock = tools::get_audio_clock(is);
diff = vp->pts - ref_clock;
sync_threshold = (delay > AV_SYNC_THRESHOLD) ? delay : AV_SYNC_THRESHOLD;
if (fabs(diff) < AV_NOSYNC_THRESHOLD)
{
if (diff <= -sync_threshold)
{
delay = 0;
}
else if (diff >= sync_threshold)
{
delay = 2 * delay;
}
}
is->frame_timer += delay;
actual_delay = is->frame_timer - (tools::av_gettime() / 1000000.0);
if (actual_delay < 0.010)
{
actual_delay = 0.010;
}
schedule_refresh(is, (int)(actual_delay * 1000 + 0.5));
display_subtitle(is);
if (++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE)
{
is->pictq_rindex = 0;
}
SDL_LockMutex(is->pictq_mutex);
is->pictq_size--;
SDL_CondSignal(is->pictq_cond);
SDL_UnlockMutex(is->pictq_mutex);
}
}
else {
schedule_refresh(is, 100);
}
}
int getCodeByType(AVMediaType type, AVCodecContext**pCodecCtx, AVCodec**pCodec)
{
int index = -1;
for (int i = 0; i < global_video_state->pFormatCtx->nb_streams; i++)
if (global_video_state->pFormatCtx->streams[i]->codec->codec_type == type){
index = i;
break;
}
if (index == -1)
{
return -1;
}
*pCodecCtx = global_video_state->pFormatCtx->streams[index]->codec;
*pCodec = avcodec_find_decoder((*pCodecCtx)->codec_id);
if (pCodec == NULL){
printf("Codec not found.\n");
return -1;
}
// Turn on the decoder
if (avcodec_open2(*pCodecCtx, *pCodec, NULL) < 0){
printf("Could not open codec.\n");
return -1;
}
if (type == AVMEDIA_TYPE_VIDEO)
{
global_video_state->video_st = global_video_state->pFormatCtx->streams[index];
global_video_state->videoStream = index;
tools::packet_queue_init(&global_video_state->videoq);
tools::packet_queue_flush(&global_video_state->videoq);
global_video_state->frame_timer = (double)tools::av_gettime() / 1000000.0;
global_video_state->frame_last_delay = 40e-3;
int width = global_video_state->video_st->codec->width;
int height = global_video_state->video_st->codec->height;
AVPixelFormat pix_fmt = global_video_state->video_st->codec->pix_fmt;
global_picture_handle = (PictureHandle *)av_malloc(sizeof(PictureHandle));
global_picture_handle->pFrame = av_frame_alloc();
global_picture_handle->pFrameYUV = av_frame_alloc();
uint8_t *out_buffer = (uint8_t *)av_malloc(avpicture_get_size(PIX_FMT_YUV420P, width, height));
avpicture_fill((AVPicture *)global_picture_handle->pFrameYUV, out_buffer, PIX_FMT_YUV420P, width, height);
}
if (type == AVMEDIA_TYPE_AUDIO)
{
global_video_state->audioStream = index;
global_video_state->audio_st = global_video_state->pFormatCtx->streams[index];
global_video_state->audio_buf_size = 0;
global_video_state->audio_buf_index = 0;
memset(&global_video_state->audio_pkt, 0, sizeof(global_video_state->audio_pkt));
tools::packet_queue_init(&global_video_state->audioq);
tools::packet_queue_flush(&global_video_state->audioq);
}
if (type == AVMEDIA_TYPE_SUBTITLE)
{
global_video_state->subtitle_st = global_video_state->pFormatCtx->streams[index];
global_video_state->subtitleStream = index;
global_video_state->pSubtitle = (AVSubtitle *)av_malloc(sizeof(AVSubtitle));
tools::packet_queue_init(&global_video_state->subtitleq);
tools::packet_queue_flush(&global_video_state->subtitleq);
}
return index;
}
static int decode_thread(void *arg)
{
VideoState *is = (VideoState *)arg;
int ret = 0;
AVPacket *packet = (AVPacket *)av_malloc(sizeof(AVPacket));
av_init_packet(packet);
while (true){
if (global_video_state->quit)
break;
if (is->audioq.size > MAX_AUDIOQ_SIZE ||
is->videoq.size > MAX_VIDEOQ_SIZE) {
SDL_Delay(10);
continue;
}
if (!mPlaying)
{
SDL_Delay(10);
continue;
}
ret = av_read_frame(is->pFormatCtx, packet);
if (ret < 0) {
if (ret == AVERROR_EOF || url_feof(is->pFormatCtx->pb)) {
printf(" acount: %d vcount: %d atime:%f vtime: %f\n",
is->audioq.nb_packets, is->videoq.nb_packets,
is->audio_clock, is->video_clock);
is->decodeState = 0;
break;
}
if (is->pFormatCtx->pb && is->pFormatCtx->pb->error) {
break;
}
continue;
}
if (packet->stream_index == is->videoStream)
{
tools::packet_queue_put(&is->videoq, (AVPacket*)packet);
}
if (packet->stream_index == is->audioStream)
{
tools::packet_queue_put(&is->audioq, (AVPacket*)packet);
}
if (packet->stream_index == is->subtitleStream)
{
tools::packet_queue_put(&is->subtitleq, (AVPacket*)packet);
}
}
printf("decode_thread finish");
while (!is->quit)
{
SDL_Delay(10);
}
return 0;
}
LatLng lastGps;
static int analysis_thread(void *arg)
{
SDL_Delay(500);
VideoState *is = (VideoState *)arg;
int ret = 0;
int frameFinished = 0;
AVPacket *packet = (AVPacket *)av_malloc(sizeof(AVPacket));
av_init_packet(packet);
lastGps = LatLng();
AVSubtitle *pSubtitle = (AVSubtitle *)av_malloc(sizeof(AVSubtitle));
while (true){
if (global_video_state->quit)
break;
if (m_Release)
{
break;
}
ret = av_read_frame(is->pFormatCtx, packet);
if (ret < 0) {
if (ret == AVERROR_EOF || url_feof(is->pFormatCtx->pb)) {
printf(" acount: %d vcount: %d atime:%f vtime: %f\n",
is->audioq.nb_packets, is->videoq.nb_packets,
is->audio_clock, is->video_clock);
break;
}
if (is->pFormatCtx->pb && is->pFormatCtx->pb->error) {
break;
}
continue;
}
if (is->quit)
break;
if (packet->stream_index == is->subtitleStream)
{
int len1 = avcodec_decode_subtitle2(is->subtitle_st->codec, pSubtitle, &frameFinished, (AVPacket*)packet);
if (len1 < 0)
{
continue;
}
if (frameFinished>0)
{
AVSubtitleRect* rect = *(pSubtitle->rects);
char * title = rect->ass;
std::vector<std::string> list = tools::split(title, ",");
string Time = "";
if (list.size() > 1)
{
Time = list.at(1);
}
string Lat = "0";
if (list.size() > 7)
{
Lat = list.at(7);
}
string Log = "0";
if (list.size() > 9)
{
Log = list.at(9);
}
string Speed = "";
if (list.size() > 11)
{
Speed = list.at(11);
}
string Date = "";
if (list.size() > 12)
{
Date = list.at(12);
}
double log = atof(Log.c_str());
double lat = atof(Lat.c_str());
if (log == 0 || lat == 0)
{
continue;
}
if (lastGps.lat != 0 && lastGps.lng != 0)
{
double tmpdistance1 = mapUtils::getDisance(lat, log, lastGps.lat, lastGps.lng);
double speed1 = atof(Speed.c_str());
speed1 = speed1*1.852;
double time1 = 10.0 / 100;
double speed2 = (tmpdistance1 / time1)* 3.6;
// GPS drift detected; skip this point
if (speed2 > speed1)
{
continue;
}
}
lastGps = LatLng();
lastGps.lat = lat;
lastGps.lng = log;
LatLng latlnt = mapUtils::gpsToBaidu(atof(Log.c_str()), atof(Lat.c_str()));
MapInfo mapInfo = MapInfo();
mapInfo.lat = latlnt.lat;
mapInfo.lng = latlnt.lng;
is->gpsList.push_back(mapInfo);
}
}
}
av_seek_frame(is->pFormatCtx, is->videoStream, 0, AVSEEK_FLAG_BACKWARD);
printf("analysis_thread finish");
// Parsing complete
if (analysisGpsEndCallBack != NULL)
{
analysisGpsEndCallBack();
}
return 0;
}
ZONGTANGDLL_API void initSDK(VideoState** p)
{
global_video_state = (VideoState *)av_mallocz(sizeof(VideoState));
// Initialize parameters
global_video_state->videoStream = -1;
global_video_state->audioStream = -1;
global_video_state->subtitleStream = -1;
/*global_video_state->quit = 0;*/
*p = global_video_state;
// Register all available file formats and codecs in the library
av_register_all();
//avformat_network_init();
// Allocate the format context
global_video_state->pFormatCtx = avformat_alloc_context();
m_Release = FALSE;
}
ZONGTANGDLL_API int openFile(char filepath[], int64_t& duration)
{
// Open the input file
if (avformat_open_input(&global_video_state->pFormatCtx, filepath, NULL, NULL) != 0){
printf("Couldn't open input stream.\n");
return -1;
}
// Retrieve the stream information
if (avformat_find_stream_info(global_video_state->pFormatCtx, NULL) < 0){
printf("Couldn't find stream information.\n");
return -1;
}
// Dump information about the input file's streams
av_dump_format(global_video_state->pFormatCtx, 0, filepath, 0);
if (global_video_state->pFormatCtx->duration != AV_NOPTS_VALUE){
int hours, mins, secs, us;
duration = global_video_state->pFormatCtx->duration;
secs = duration / AV_TIME_BASE;
us = duration % AV_TIME_BASE;
mins = secs / 60;
secs %= 60;
hours = mins / 60;
mins %= 60;
printf("%02d:%02d:%02d.%02d\n", hours, mins, secs, (100 * us) / AV_TIME_BASE);
}
return 0;
}
ZONGTANGDLL_API int setFrameCallback(FrameCallBack _callback)
{
frameCallBack = _callback;
return 0;
}
ZONGTANGDLL_API int setFrameEndCallback(FrameEndCallBack _callback)
{
frameEndCallBack = _callback;
return 0;
}
ZONGTANGDLL_API int initCodec()
{
AVCodecContext *pCodecCtx, *sCodecCtx, *aCodecCtx;
AVCodec *pCodec, *sCodec, *aCodec;
int videoindex, subtitleindex, audioindex;
videoindex = getCodeByType(AVMEDIA_TYPE_VIDEO, &pCodecCtx, &pCodec);
if (videoindex == -1){
printf("Couldn't getCodeByType.\n");
return -1;
}
subtitleindex = getCodeByType(AVMEDIA_TYPE_SUBTITLE, &sCodecCtx, &sCodec);
audioindex = getCodeByType(AVMEDIA_TYPE_AUDIO, &aCodecCtx, &aCodec);
if (audioindex >= 0) {
stream_component_open(global_video_state, audioindex);
}
return 0;
}
ZONGTANGDLL_API int setWindownHandle(HWND handle)
{
if (global_picture_handle == NULL)
{
return -1;
}
if (global_video_state->video_st == NULL)
{
return -1;
}
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
printf("Could not initialize SDL - %s\n", SDL_GetError());
return -1;
}
SDL_Window *screen;
screen = SDL_CreateWindowFrom((void *)(handle));
if (!screen) {
printf("SDL: could not create window - exiting:%s\n", SDL_GetError());
return -1;
}
int iWidth = 0;
int iHeight = 0;
SDL_GetWindowSize(screen, &iWidth, &iHeight);
global_picture_handle->sdlRenderer = SDL_CreateRenderer(screen, -1, 0);
//IYUV: Y + U + V (3 planes)
//YV12: Y + V + U (3 planes)
int width = global_video_state->video_st->codec->width;
int height = global_video_state->video_st->codec->height;
global_picture_handle->sdlTexture = SDL_CreateTexture(global_picture_handle->sdlRenderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING, width, height);
global_picture_handle->sdlRect.x = 0;
global_picture_handle->sdlRect.y = 0;
global_picture_handle->sdlRect.w = iWidth;
global_picture_handle->sdlRect.h = iHeight;
global_picture_handle->srcRect.x = 0;
global_picture_handle->srcRect.y = 0;
global_picture_handle->srcRect.w = width;
global_picture_handle->srcRect.h = height;
return 0;
}
// Refresh thread, acting as a daemon/event-loop thread
static int refresh_thread(void *arg)
{
SDL_Event event;
for (;;)
{
if (global_video_state == NULL)
{
break;
}
if (global_video_state->quit)
{
break;
}
SDL_WaitEvent(&event);
switch (event.type) {
case FF_QUIT_EVENT:
case SDL_QUIT:
global_video_state->quit = 1;
break;
case FF_ALLOC_EVENT:
//alloc_picture(event.user.data1);
break;
case FF_REFRESH_EVENT:
video_refresh_timer(event.user.data1);
break;
default:
break;
}
}
return 0;
}
ZONGTANGDLL_API int play()
{
mPlaying = true;
global_video_state->quit = 0;
global_video_state->pictq_mutex = SDL_CreateMutex();
global_video_state->pictq_cond = SDL_CreateCond();
schedule_refresh(global_video_state, 40);
global_video_state->decodeState = 1;// Reset the decoding status to 1
global_video_state->parse_tid = SDL_CreateThread(decode_thread, "myThread", global_video_state);
if (!global_video_state->parse_tid)
{
global_video_state->quit = 1;
av_free(global_video_state);
return -1;
}
global_video_state->video_tid = SDL_CreateThread(video_thread, "videoThread", global_video_state);
if (!global_video_state->video_tid)
{
global_video_state->quit = 1;
av_free(global_video_state);
return -1;
}
SDL_Thread *refresh_tid;
refresh_tid = SDL_CreateThread(refresh_thread, "myRefreshThread", NULL);
if (!refresh_tid)
{
global_video_state->quit = 1;
av_free(global_video_state);
return -1;
}
return 0;
}
ZONGTANGDLL_API int seek(int64_t timestamp)
{
int ret = 0;
//ret = pause(true);
tools::packet_queue_flush(&global_video_state->audioq);
tools::packet_queue_flush(&global_video_state->videoq);
tools::packet_queue_flush(&global_video_state->subtitleq);
if (ret < 0)
{
return -1;
}
if (av_seek_frame(global_video_state->pFormatCtx, global_video_state->videoStream, timestamp, AVSEEK_FLAG_BACKWARD) < 0){
printf("Could not seek_frame.\n");
return -1;
}
/*ret = pause(false);
if (ret < 0)
{
return -1;
}*/
return 0;
}
// true to pause, false to resume
ZONGTANGDLL_API int pause(bool enable)
{
mPlaying = !enable;
if (global_video_state == NULL)
{
return -1;
}
if (global_video_state->quit)
{
return -1;
}
SDL_AudioStatus status = SDL_GetAudioStatus();
if (enable)
{
PacketQueue q = global_video_state->audioq;
SDL_CondSignal(q.cond);
SDL_PauseAudio(1);
printf("pause SDL_PauseAudio\n");
}
else
{
PacketQueue q = global_video_state->audioq;
/*SDL_PauseAudio(SDL_AUDIO_PLAYING);*/
SDL_CondSignal(q.cond);
SDL_PauseAudio(0);
printf("pause SDL_AUDIO_PLAYING\n");
}
return 0;
}
ZONGTANGDLL_API bool getState()
{
return mPlaying;
}
ZONGTANGDLL_API int desSDK()
{
try{
if (global_video_state == NULL)
{
return -1;
}
global_video_state->quit = 1;
m_Release = TRUE;
PacketQueue q = global_video_state->audioq;
SDL_CondSignal(q.cond);
SDL_CloseAudio();
SDL_Delay(500);
if (global_picture_handle != NULL)
{
av_frame_free(&global_picture_handle->pFrameYUV);
global_picture_handle->sdlRenderer = NULL;
global_picture_handle->pSubtitle = NULL;
}
if (global_video_state != NULL)
{
tools::packet_queue_flush(&global_video_state->audioq);
tools::packet_queue_flush(&global_video_state->videoq);
tools::packet_queue_flush(&global_video_state->subtitleq);
if (global_video_state->video_st != NULL)
{
avcodec_close(global_video_state->video_st->codec);
global_video_state->video_st = NULL;
}
if (global_video_state->audio_st != NULL)
{
avcodec_close(global_video_state->audio_st->codec);
global_video_state->audio_st = NULL;
}
if (global_video_state->subtitle_st != NULL)
{
avcodec_close(global_video_state->subtitle_st->codec);
global_video_state->subtitle_st = NULL;
}
avformat_close_input(&global_video_state->pFormatCtx);
}
}
catch (...)
{
printf("Exception : \n");
return -1;
}
return 0;
}
ZONGTANGDLL_API int setVolumeEnable(bool enable)
{
return 0;
}
ZONGTANGDLL_API int setVolume(double value)
{
return 0;
}
ZONGTANGDLL_API int analysisGps()
{
if (global_video_state == NULL)
{
return -1;
}
global_video_state->gpsList.clear();
global_video_state->parse_tid = SDL_CreateThread(analysis_thread, "myAnalysisThread", global_video_state);
if (!global_video_state->parse_tid)
{
global_video_state->quit = 1;
av_free(global_video_state);
return -1;
}
SDL_Delay(100);
return 0;
}
ZONGTANGDLL_API int setAnalysisGpsEndCallback(AnalysisGpsEndCallBack _callback)
{
analysisGpsEndCallBack = _callback;
return 0;
}
int WriteJPEG(AVFrame* pFrame, int width, int height, int iIndex, char**filename, LPTSTR Dir)
{
// Output file path
char out_file[MAX_PATH] = { 0 };
char acInputFileName[200] = { '\0' };
WideCharToMultiByte(CP_OEMCP, NULL, Dir, -1, acInputFileName, 200, NULL, FALSE);
int val = sprintf_s(out_file, sizeof(out_file), "%s\\%d.jpg", acInputFileName, iIndex);
// Allocate an AVFormatContext
AVFormatContext* pFormatCtx = avformat_alloc_context();
// Set the output file format
pFormatCtx->oformat = av_guess_format("mjpeg", NULL, NULL);
// Create and initialize an AVIOContext for the output URL
if (avio_open(&pFormatCtx->pb, out_file, AVIO_FLAG_READ_WRITE) < 0) {
printf("Couldn't open output file.");
return -1;
}
// Build a new stream
AVStream* pAVStream = avformat_new_stream(pFormatCtx, 0);
if (pAVStream == NULL) {
return -1;
}
// Set the stream's codec parameters
AVCodecContext* pCodecCtx = pAVStream->codec;
pCodecCtx->codec_id = pFormatCtx->oformat->video_codec;
pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
pCodecCtx->pix_fmt = PIX_FMT_YUVJ420P;
pCodecCtx->width = width;
pCodecCtx->height = height;
pCodecCtx->time_base.num = 1;
pCodecCtx->time_base.den = 25;
av_dump_format(pFormatCtx, 0, out_file, 1);
// Find the JPEG encoder
AVCodec* pCodec = avcodec_find_encoder(pCodecCtx->codec_id);
if (!pCodec) {
printf("Codec not found.");
return -1;
}
// Open the encoder pCodec on pCodecCtx
if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
printf("Could not open codec.");
return -1;
}
//Write Header
avformat_write_header(pFormatCtx, NULL);
int y_size = pCodecCtx->width * pCodecCtx->height;
// Allocate a packet large enough for the encoded frame
AVPacket pkt;
av_new_packet(&pkt, y_size * 3);
// Encode the frame into a JPEG packet
int got_picture = 0;
int ret = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_picture);
if (ret < 0) {
printf("Encode Error.\n");
return -1;
}
if (got_picture == 1) {
ret = av_write_frame(pFormatCtx, &pkt);
}
av_free_packet(&pkt);
//Write Trailer
av_write_trailer(pFormatCtx);
printf("Encode Successful.\n");
if (pAVStream) {
avcodec_close(pAVStream->codec);
}
avio_close(pFormatCtx->pb);
avformat_free_context(pFormatCtx);
char* res = new char[strlen(out_file) + 1];
strcpy_s(res, strlen(out_file) + 1, out_file);
*filename = res;
return 0;
}
ZONGTANGDLL_API int saveFrame(char** filename, LPTSTR Dir)
{
try{
if (global_picture_handle == NULL)
{
return -1;
}
if (global_video_state == NULL)
{
return -1;
}
if (global_video_state->video_st == NULL)
{
return -1;
}
AVFrame *pFrame = global_picture_handle->pFrame;
AVCodecContext *pCodecCtx = global_video_state->video_st->codec;
int width = pCodecCtx->width;
int height = pCodecCtx->height;
int index = global_video_state->video_dts;
WriteJPEG(pFrame, width, height, index, filename, Dir);
}
catch (...)
{
printf("Exception : \n");
return -1;
}
return 0;
}
C++ dependencies
OpenCV 3.4 or later
Project address:
None yet
All subsequent code will be published to git for source management. If you need the complete code for study, you can also contact me, phone 18824182332 (WeChat uses the same number).