unsigned int mb_h = (frame->height + 15) / 16;
unsigned int mb_w = (frame->width + 15) / 16;
unsigned int nb_mb = mb_h * mb_w;
unsigned int block_idx;
*qscale_type = par->type;
for (block_idx = 0; block_idx < nb_mb; block_idx++) {
    AVVideoBlockParams *b = av_video_enc_params_block(par, block_idx);
    (*table)[block_idx] = par->qp + b->delta_qp;
int32_t qp
Base quantisation parameter for the frame.
AVFrameSideData * av_frame_get_side_data(const AVFrame *frame, enum AVFrameSideDataType type)
This structure describes decoded (raw) audio or video data.
static const uint16_t table[]
@ AV_VIDEO_ENC_PARAMS_MPEG2
Video encoding parameters for a given frame.
enum AVVideoEncParamsType type
Type of the parameters (the codec they are used with).
unsigned int nb_blocks
Number of blocks in the array.
Data structure for storing block-level encoding information.
int ff_qp_table_extract(AVFrame *frame, int8_t **table, int *table_w, int *table_h, enum AVVideoEncParamsType *qscale_type)
Extract a libpostproc-compatible QP table - an 8-bit QP value per 16x16 macroblock,...
@ AV_FRAME_DATA_VIDEO_ENC_PARAMS
Encoding parameters for a video frame, as described by AVVideoEncParams.
Structure to hold side data for an AVFrame.
static av_always_inline AVVideoBlockParams * av_video_enc_params_block(AVVideoEncParams *par, unsigned int idx)