/* aasc_decode_init(): copy the palette out of extradata, one 32-bit entry per color */
    for (i = 0; i < s->palette_size / 4; i++) {

/* aasc_decode_frame(): packet payload and size */
    const uint8_t *buf = avpkt->data;
    int buf_size       = avpkt->size;

/* dispatch on the stream fourcc */
    case MKTAG('A', 'A', 'S', '4'):
    case MKTAG('A', 'A', 'S', 'C'):

/* uncompressed path: check the payload size, then copy the bottom-up
   source rows into the top-down output frame */
    if (buf_size < stride * avctx->height)
    for (i = avctx->height - 1; i >= 0; i--) {
        memcpy(s->frame->data[0] + i * s->frame->linesize[0], buf,
               avctx->width * psize);

/* PAL8 output: attach the palette to the frame's second data plane */
    memcpy(s->frame->data[1], s->palette, s->palette_size);
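The bottom-up row copy in the excerpt above is how DIB-style codecs such as AASC store uncompressed pictures: the first row in the packet is the bottom row of the image. A self-contained sketch of the same flip, with illustrative names not taken from aasc.c:

#include <stdint.h>
#include <string.h>

/* Copy a bottom-up source image into a top-down destination.
 * src_stride is the packed source row size in bytes, dst_stride the
 * (possibly padded) destination line size. */
static void copy_bottom_up(uint8_t *dst, int dst_stride,
                           const uint8_t *src, int src_stride,
                           int width_bytes, int height)
{
    for (int i = height - 1; i >= 0; i--) {
        memcpy(dst + i * dst_stride, src, width_bytes);
        src += src_stride;
    }
}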
In libavfilter, the word “frame” indicates either a video frame or a group of audio samples, as stored in an AVFrame structure. “Format” means, for each input and each output, the list of supported formats: for video that means pixel format, for audio that means channel layout and sample format. These lists are references to shared objects: when the negotiation mechanism computes the intersection of the formats supported at each end of a link, all references to both lists are replaced with a reference to the intersection, and when a single format is eventually chosen for a link amongst the remaining ones, all references to the list are updated. That means that if a filter requires that its input and output have the same format amongst a supported list, all it has to do is use a reference to the same list of formats. query_formats can leave some formats unset and return AVERROR(EAGAIN) to cause the negotiation mechanism to try again later; that can be used by filters with complex requirements to use the format negotiated on one link to set the formats supported on another. Frame references, ownership and permissions.
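As a rough illustration of the shared-list idea above (a sketch only, assuming the classic query_formats callback and the internal helpers ff_make_format_list() and ff_set_common_formats(), which have changed across FFmpeg versions), a filter that needs the same pixel format on input and output hands one list to both sides:

static int query_formats(AVFilterContext *ctx)
{
    /* illustrative format set; any AV_PIX_FMT_NONE-terminated list works */
    static const enum AVPixelFormat pix_fmts[] = {
        AV_PIX_FMT_PAL8, AV_PIX_FMT_BGR24, AV_PIX_FMT_NONE
    };
    AVFilterFormats *fmts = ff_make_format_list(pix_fmts);

    if (!fmts)
        return AVERROR(ENOMEM);
    /* one shared list for all inputs and outputs: when negotiation narrows
       it on one link, every other reference sees the same narrowed list */
    return ff_set_common_formats(ctx, fmts);
}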
static int aasc_decode_frame(AVCodecContext *avctx, AVFrame *rframe, int *got_frame, AVPacket *avpkt)
void av_frame_free(AVFrame **frame)
Free the frame and any dynamically allocated objects in it, e.g. extradata.
This structure describes decoded (raw) audio or video data.
@ AV_PIX_FMT_BGR24
packed RGB 8:8:8, 24bpp, BGRBGR...
AVCodec p
The public AVCodec.
AVFrame * av_frame_alloc(void)
Allocate an AVFrame and set its fields to default values.
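In a decoder of this shape, the frame returned by av_frame_alloc() is typically created once at init time and kept in the private context so it can be reused for every packet. A sketch of that pattern, omitting the pixel-format and palette setup shown in the excerpt (AascContext is the decoder's private context struct):

static av_cold int aasc_decode_init(AVCodecContext *avctx)
{
    AascContext *s = avctx->priv_data;

    /* persistent frame, refreshed per packet with ff_reget_buffer() */
    s->frame = av_frame_alloc();
    if (!s->frame)
        return AVERROR(ENOMEM);
    return 0;
}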
#define AV_LOG_ERROR
Something went wrong and cannot losslessly be recovered.
#define FF_CODEC_DECODE_CB(func)
int(* init)(AVBSFContext *ctx)
#define CODEC_LONG_NAME(str)
static av_cold int aasc_decode_init(AVCodecContext *avctx)
#define AV_CODEC_CAP_DR1
Codec uses get_buffer() or get_encode_buffer() for allocating buffers and supports custom allocators.
int av_frame_ref(AVFrame *dst, const AVFrame *src)
Set up a new reference to the data described by the source frame.
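Because the decoder keeps that persistent internal frame, handing a picture back to the caller is a matter of taking a new reference rather than copying pixels; the usual end of the decode callback looks roughly like this (a sketch, error paths shortened):

    if ((ret = av_frame_ref(rframe, s->frame)) < 0)
        return ret;
    *got_frame = 1;
    return buf_size;   /* the whole packet was consumed */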
const FFCodec ff_aasc_decoder
uint32_t palette[AVPALETTE_COUNT]
int bits_per_coded_sample
bits per sample/pixel from the demuxer (needed for huffyuv).
@ AV_PIX_FMT_RGB555LE
packed RGB 5:5:5, 16bpp, (msb)1X 5R 5G 5B(lsb), little-endian, X=unused/undefined
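As a worked example of that layout, unpacking one RGB555LE pixel (assuming the 16-bit value is read little-endian, e.g. with AV_RL16()):

    uint16_t v = AV_RL16(src);
    uint8_t  r = (v >> 10) & 0x1F;   /* bits 14..10 */
    uint8_t  g = (v >>  5) & 0x1F;   /* bits  9..5  */
    uint8_t  b =  v        & 0x1F;   /* bits  4..0  */
    /* bit 15 (X) is unused/undefined */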
#define i(width, name, range_min, range_max)
uint8_t * extradata
some codecs need / can use extradata like Huffman tables.
const char * name
Name of the codec implementation.
enum AVPixelFormat pix_fmt
Pixel format, see AV_PIX_FMT_xxx.
@ AV_PIX_FMT_PAL8
8 bits with AV_PIX_FMT_RGB32 palette
int ff_reget_buffer(AVCodecContext *avctx, AVFrame *frame, int flags)
Identical in function to ff_get_buffer(), except it reuses the existing buffer if available.
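That is what lets a codec update the previous picture in place, as the RLE path here does: each packet starts by refreshing the same persistent frame (a sketch; the flags argument is left at 0):

    if ((ret = ff_reget_buffer(avctx, s->frame, 0)) < 0)
        return ret;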
main external API structure.
static av_cold int aasc_decode_end(AVCodecContext *avctx)
unsigned int codec_tag
fourcc (LSB first, so "ABCD" -> ('D'<<24) + ('C'<<16) + ('B'<<8) + 'A').
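A worked example of that byte order, using the MKTAG macro listed below: the tag checked in the code excerpt expands to

    MKTAG('A', 'A', 'S', 'C') == 'A' | ('A' << 8) | ('S' << 16) | ('C' << 24)
                              == 0x43534141   /* the bytes read "AASC" when stored LSB first */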
int ff_msrle_decode(AVCodecContext *avctx, AVFrame *pic, int depth, GetByteContext *gb)
Decode stream in MS RLE format into frame.
This structure stores compressed data.
int width
picture width / height.
static av_always_inline void bytestream2_init(GetByteContext *g, const uint8_t *buf, int buf_size)
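The RLE path of this decoder ties the two helpers above together: the packet is wrapped in a GetByteContext and passed to the MS RLE decoder. Roughly, following the excerpt (depth 8 matches the PAL8 case; a sketch, not the verbatim source):

    bytestream2_init(&s->gb, buf, buf_size);
    ret = ff_msrle_decode(avctx, s->frame, 8, &s->gb);
    if (ret < 0)
        return ret;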
#define AVERROR_INVALIDDATA
Invalid data found when processing input.
#define MKTAG(a, b, c, d)