
android ijkplayer C-layer analysis — the prepare process and the read thread (continued 1: a rough look at decoding)

zhonghanwen / 2605 reads

Abstract: audio, video, and subtitles each get their own handling. Tracking down two more layers, you'll find the core function is ffplay_video_thread. With that, the decoding pass is done. The whole walkthrough is admittedly rough; I'll leave it at this for now.

The previous article noted a key function inside the read_thread thread: avformat_open_input (utils.c), which should be what opens and reads the media file; it belongs to the ffmpeg layer. This time let's step into it:

int avformat_open_input(AVFormatContext **ps, const char *filename,
                        AVInputFormat *fmt, AVDictionary **options)
{
    ......
    if ((ret = init_input(s, filename, &tmp)) < 0)
        goto fail;
    s->probe_score = ret;

    if (!s->protocol_whitelist && s->pb && s->pb->protocol_whitelist) {
        s->protocol_whitelist = av_strdup(s->pb->protocol_whitelist);
        if (!s->protocol_whitelist) {
            ret = AVERROR(ENOMEM);
            goto fail;
        }
    }

    if (!s->protocol_blacklist && s->pb && s->pb->protocol_blacklist) {
        s->protocol_blacklist = av_strdup(s->pb->protocol_blacklist);
        if (!s->protocol_blacklist) {
            ret = AVERROR(ENOMEM);
            goto fail;
        }
    }

    if (s->format_whitelist && av_match_list(s->iformat->name, s->format_whitelist, ",") <= 0) {
        av_log(s, AV_LOG_ERROR, "Format not on whitelist \'%s\'\n", s->format_whitelist);
        ret = AVERROR(EINVAL);
        goto fail;
    }
    ......
}    

The comment on init_input reads: "Open input file and probe the format if necessary." — open the file and determine its format. Stepping into it, there isn't much code; the key call is av_probe_input_buffer2, ffmpeg's format-probing function, which in turn calls avio_read to read the file. Inside avio_read, the main work is reading packets in a loop from the AVIOContext buffer, done through the read_packet function pointer stored in the AVIOContext.

Now let's go back to read_thread and see how an H.264 video stream is handled.

static int read_thread(void *arg)
{
    ......
    for (i = 0; i < ic->nb_streams; i++) {
        AVStream *st = ic->streams[i];
        enum AVMediaType type = st->codecpar->codec_type;
        st->discard = AVDISCARD_ALL;
        if (type >= 0 && ffp->wanted_stream_spec[type] && st_index[type] == -1)
            if (avformat_match_stream_specifier(ic, st, ffp->wanted_stream_spec[type]) > 0)
                st_index[type] = i;

        // choose first h264

        if (type == AVMEDIA_TYPE_VIDEO) {
            enum AVCodecID codec_id = st->codecpar->codec_id;
            video_stream_count++;
            if (codec_id == AV_CODEC_ID_H264) {
                h264_stream_count++;
                if (first_h264_stream < 0)
                    first_h264_stream = i;
            }
        }
    }
    ......
}

The loop iterates over the streams and records the first H.264 one it finds. Continuing down, we reach stream_component_open, mentioned in the previous article; this time let's step inside:

static int stream_component_open(FFPlayer *ffp, int stream_index)
{
    ......
    codec = avcodec_find_decoder(avctx->codec_id);
    ......
    switch (avctx->codec_type) {
    case AVMEDIA_TYPE_AUDIO:
    if ((ret = decoder_start(&is->auddec, audio_thread, ffp, "ff_audio_dec")) < 0)
            goto out;
    ......
    case AVMEDIA_TYPE_VIDEO:
    if ((ret = decoder_start(&is->viddec, video_thread, ffp, "ff_video_dec")) < 0)
            goto out;
    ......
    case AVMEDIA_TYPE_SUBTITLE:
    ......
}

This is a long function. As I read it, it splits into two parts: the first half finds the decoder, and the switch/case in the second half kicks off decoding, with separate handling for audio, video, and subtitles. decoder_start is the key function that starts the decoding work. Stepping inside:

static int decoder_start(Decoder *d, int (*fn)(void *), void *arg, const char *name)
{
    packet_queue_start(d->queue);
    d->decoder_tid = SDL_CreateThreadEx(&d->_decoder_tid, fn, arg, name);
    if (!d->decoder_tid) {
        av_log(NULL, AV_LOG_ERROR, "SDL_CreateThread(): %s\n", SDL_GetError());
        return AVERROR(ENOMEM);
    }
    return 0;
}

Short, isn't it? But it starts a thread. From the arguments we know these are the video_thread and audio_thread functions passed in above — the key thread functions. So the decoding process is completely asynchronous. Next, let's look at those thread functions.

static int video_thread(void *arg)
{
    FFPlayer *ffp = (FFPlayer *)arg;
    int       ret = 0;

    if (ffp->node_vdec) {
        ret = ffpipenode_run_sync(ffp->node_vdec);
    }
    return ret;
}

Short again. First the argument: as seen above, it's an FFPlayer, the structure passed all the way from read_thread. You could call it the player's master structure — nearly everything playback needs lives in it. Now the key call, ffpipenode_run_sync: tracking down two more layers, you'll find the core function is ffplay_video_thread. It looks fairly involved inside, so let's step in:

static int ffplay_video_thread(void *arg)
{
    ......
    for (;;) {
        ret = get_video_frame(ffp, frame);
        ......
        ret = av_buffersrc_add_frame(filt_in, frame);
        if (ret < 0)
            goto the_end;
        
        while (ret >= 0) {
            is->frame_last_returned_time = av_gettime_relative() / 1000000.0;

            ret = av_buffersink_get_frame_flags(filt_out, frame, 0);
            if (ret < 0) {
                if (ret == AVERROR_EOF)
                    is->viddec.finished = is->viddec.pkt_serial;
                ret = 0;
                break;
            }

            is->frame_last_filter_delay = av_gettime_relative() / 1000000.0 - is->frame_last_returned_time;
            if (fabs(is->frame_last_filter_delay) > AV_NOSYNC_THRESHOLD / 10.0)
                is->frame_last_filter_delay = 0;
#if CONFIG_AVFILTER
            tb = filt_out->inputs[0]->time_base;
#endif
            duration = (frame_rate.num && frame_rate.den ? av_q2d((AVRational){frame_rate.den, frame_rate.num}) : 0);
            pts = (frame->pts == AV_NOPTS_VALUE) ? NAN : frame->pts * av_q2d(tb);
            ret = queue_picture(ffp, frame, pts, duration, av_frame_get_pkt_pos(frame), is->viddec.pkt_serial);
            av_frame_unref(frame);
#if CONFIG_AVFILTER
        }
#endif
    }
}

The key points are get_video_frame first, then av_buffersrc_add_frame, and queue_picture inside the subsequent while loop. So we have no choice but to step into get_video_frame:

static int get_video_frame(FFPlayer *ffp, AVFrame *frame)
{
    VideoState *is = ffp->is;
    int got_picture;

    ffp_video_statistic_l(ffp);
    if ((got_picture = decoder_decode_frame(ffp, &is->viddec, frame, NULL)) < 0)
        return -1;

    if (got_picture) {
        double dpts = NAN;

        if (frame->pts != AV_NOPTS_VALUE)
            dpts = av_q2d(is->video_st->time_base) * frame->pts;

        frame->sample_aspect_ratio = av_guess_sample_aspect_ratio(is->ic, is->video_st, frame);

        if (ffp->framedrop>0 || (ffp->framedrop && get_master_sync_type(is) != AV_SYNC_VIDEO_MASTER)) {
            if (frame->pts != AV_NOPTS_VALUE) {
                double diff = dpts - get_master_clock(is);
                if (!isnan(diff) && fabs(diff) < AV_NOSYNC_THRESHOLD &&
                    diff - is->frame_last_filter_delay < 0 &&
                    is->viddec.pkt_serial == is->vidclk.serial &&
                    is->videoq.nb_packets) {
                    is->frame_drops_early++;
                    is->continuous_frame_drops_early++;
                    if (is->continuous_frame_drops_early > ffp->framedrop) {
                        is->continuous_frame_drops_early = 0;
                    } else {
                        av_frame_unref(frame);
                        got_picture = 0;
                    }
                }
            }
        }
    }

    return got_picture;
}

There is really only one key point here: decoder_decode_frame. All right, let's keep going down — the call stack is getting deep:

static int decoder_decode_frame(FFPlayer *ffp, Decoder *d, AVFrame *frame, AVSubtitle *sub) {
    int got_frame = 0;

    do {
        int ret = -1;

        if (d->queue->abort_request)
            return -1;

        if (!d->packet_pending || d->queue->serial != d->pkt_serial) {
            AVPacket pkt;
            do {
                if (d->queue->nb_packets == 0)
                    SDL_CondSignal(d->empty_queue_cond);
                if (packet_queue_get_or_buffering(ffp, d->queue, &pkt, &d->pkt_serial, &d->finished) < 0)
                    return -1;
                if (pkt.data == flush_pkt.data) {
                    avcodec_flush_buffers(d->avctx);
                    d->finished = 0;
                    d->next_pts = d->start_pts;
                    d->next_pts_tb = d->start_pts_tb;
                }
            } while (pkt.data == flush_pkt.data || d->queue->serial != d->pkt_serial);
            av_packet_unref(&d->pkt);
            d->pkt_temp = d->pkt = pkt;
            d->packet_pending = 1;
        }

        switch (d->avctx->codec_type) {
            case AVMEDIA_TYPE_VIDEO: {
                ret = avcodec_decode_video2(d->avctx, frame, &got_frame, &d->pkt_temp);
                if (got_frame) {
                    ffp->stat.vdps = SDL_SpeedSamplerAdd(&ffp->vdps_sampler, FFP_SHOW_VDPS_AVCODEC, "vdps[avcodec]");
                    if (ffp->decoder_reorder_pts == -1) {
                        frame->pts = av_frame_get_best_effort_timestamp(frame);
                    } else if (!ffp->decoder_reorder_pts) {
                        frame->pts = frame->pkt_dts;
                    }
                }
                }
                break;
            case AVMEDIA_TYPE_AUDIO:
                ret = avcodec_decode_audio4(d->avctx, frame, &got_frame, &d->pkt_temp);
                if (got_frame) {
                    AVRational tb = (AVRational){1, frame->sample_rate};
                    if (frame->pts != AV_NOPTS_VALUE)
                        frame->pts = av_rescale_q(frame->pts, av_codec_get_pkt_timebase(d->avctx), tb);
                    else if (d->next_pts != AV_NOPTS_VALUE)
                        frame->pts = av_rescale_q(d->next_pts, d->next_pts_tb, tb);
                    if (frame->pts != AV_NOPTS_VALUE) {
                        d->next_pts = frame->pts + frame->nb_samples;
                        d->next_pts_tb = tb;
                    }
                }
                break;
            case AVMEDIA_TYPE_SUBTITLE:
                ret = avcodec_decode_subtitle2(d->avctx, sub, &got_frame, &d->pkt_temp);
                break;
            default:
                break;
        }

        if (ret < 0) {
            d->packet_pending = 0;
        } else {
            d->pkt_temp.dts =
            d->pkt_temp.pts = AV_NOPTS_VALUE;
            if (d->pkt_temp.data) {
                if (d->avctx->codec_type != AVMEDIA_TYPE_AUDIO)
                    ret = d->pkt_temp.size;
                d->pkt_temp.data += ret;
                d->pkt_temp.size -= ret;
                if (d->pkt_temp.size <= 0)
                    d->packet_pending = 0;
            } else {
                if (!got_frame) {
                    d->packet_pending = 0;
                    d->finished = d->pkt_serial;
                }
            }
        }
    } while (!got_frame && !d->finished);

    return got_frame;
}

packet_queue_get_or_buffering fetches one packet's worth of pre-decode data from the queue, and then avcodec_decode_video2 is called. I won't step into that one — it would never end; in short, it decodes one frame. Rewinding back to ffplay_video_thread, the next step is queue_picture, which puts the decoded picture into the decoded-frame queue.
With that, the decoding pass is done. This really was a rough analysis — apologies for that — but I'll leave it here for now and dig into specific points later when time allows.

Copyright of this article belongs to the author. Please do not reprint without permission. If this article violates any rules, you may contact the administrator to have it removed.

When reprinting, please cite this article's address: https://www.ucloud.cn/yun/66711.html

