                        const char *keyword, void *value, int *lines_written)
    /* ... */
    len = strlen(keyword);
    /* ... */
    if (!strcmp(fmt, "%d")) {
    /* ... */
    int bitpix, naxis, naxis3 = 1, bzero = 0, rgb = 0, lines_written = 0, lines_left;
    int pcount = 0, gcount = 1;
    float datamax, datamin;
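The counters above describe the geometry that the FITS header will advertise: bitpix is the bits per data value, naxis/naxis3 the number and size of the axes, bzero the offset used to store unsigned 16-bit samples, and rgb flags planar RGB input. As a hedged sketch (an assumption for illustration, not this file's code; guess_fits_geometry and the par parameter are hypothetical names), the pixel formats listed in the reference section below could map onto those fields roughly like this:

#include <errno.h>
#include "libavcodec/codec_par.h"   /* AVCodecParameters */
#include "libavutil/error.h"        /* AVERROR()         */
#include "libavutil/pixfmt.h"       /* AV_PIX_FMT_*      */

/* Hypothetical helper: derive FITS geometry fields from the stream's pixel
 * format.  Unsigned 16-bit samples are stored as signed FITS integers with
 * a BZERO offset of 32768. */
static int guess_fits_geometry(const AVCodecParameters *par, int *bitpix,
                               int *naxis, int *naxis3, int *bzero, int *rgb)
{
    *naxis3 = 1; *bzero = 0; *rgb = 0;

    switch (par->format) {
    case AV_PIX_FMT_GRAY8:                      /* one plane, 8 bits  */
        *bitpix = 8;  *naxis = 2;
        break;
    case AV_PIX_FMT_GRAY16BE:                   /* one plane, 16 bits */
        *bitpix = 16; *naxis = 2; *bzero = 32768;
        break;
    case AV_PIX_FMT_GBRP:                       /* planar RGB(A), 8 bits  */
    case AV_PIX_FMT_GBRAP:
        *bitpix = 8;  *naxis = 3; *rgb = 1;
        *naxis3 = (par->format == AV_PIX_FMT_GBRAP) ? 4 : 3;
        break;
    case AV_PIX_FMT_GBRP16BE:                   /* planar RGB(A), 16 bits */
    case AV_PIX_FMT_GBRAP16BE:
        *bitpix = 16; *naxis = 3; *rgb = 1; *bzero = 32768;
        *naxis3 = (par->format == AV_PIX_FMT_GBRAP16BE) ? 4 : 3;
        break;
    default:
        return AVERROR(EINVAL);                 /* unsupported pixel format */
    }
    return 0;
}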
    /* ... */
    memcpy(buffer, "SIMPLE  = ", 10);              /* primary header starts with the SIMPLE card */
    memset(buffer + 10, ' ', 70);
    /* ... */
    memcpy(buffer, "XTENSION= 'IMAGE   '", 20);    /* subsequent images are written as IMAGE extensions */
    memset(buffer + 20, ' ', 60);
    /* ... */
    memcpy(buffer, "CTYPE3  = 'RGB     '", 20);    /* label the third axis of planar RGB data */
    memset(buffer + 20, ' ', 60);
    /* ... */
    memset(buffer + 3, ' ', 77);                   /* pad the 3-character END card to 80 bytes */
    /* ... */
    lines_left = ((lines_written + 35) / 36) * 36 - lines_written;
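Both the keyword formatting near the top of the file and the padding computed above follow from the FITS layout: a header is a sequence of 80-character cards, and it must end on a 2880-byte boundary, i.e. after a whole multiple of 36 cards. The following is a minimal sketch of those two rules, not this muxer's code; format_card, pad_fits_header and their parameters are hypothetical, while ffio_fill() is the real internal helper whose prototype appears in the reference section below.

#include <stdio.h>
#include <string.h>
#include "avio.h"            /* AVIOContext */
#include "avio_internal.h"   /* ffio_fill() (internal libavformat header) */

/* Hypothetical helper: lay out one 80-character FITS card.  The keyword
 * (assumed to be at most 8 characters) fills columns 1-8, "= " sits in
 * columns 9-10, and the decimal value plus trailing spaces fills the rest. */
static void format_card(char card[80], const char *keyword, int value)
{
    int n;
    memset(card, ' ', 80);                      /* cards are space padded    */
    memcpy(card, keyword, strlen(keyword));     /* keyword, columns 1-8      */
    card[8] = '=';
    card[9] = ' ';
    n = snprintf(card + 10, 70, "%d", value);   /* value text from column 11 */
    card[10 + n] = ' ';                         /* replace the trailing NUL  */
}

/* Hypothetical helper: pad the header out to a whole 2880-byte block.
 * One block holds 36 cards; after 5 cards, for example,
 * ((5 + 35) / 36) * 36 - 5 = 31 blank cards remain to be written. */
static void pad_fits_header(AVIOContext *pb, int lines_written)
{
    int lines_left = ((lines_written + 35) / 36) * 36 - lines_written;
    ffio_fill(pb, ' ', 80LL * lines_left);      /* spaces up to the block end */
}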
    /* ... */
    .p.extensions   = "fits",
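The .p.extensions initializer above belongs to the muxer's FFOutputFormat registration entry. Purely as a hedged sketch of what such an entry looks like in current FFmpeg (everything except the "fits" extension string is an assumption here, including the private context type and the callback names), it might resemble:

/* Sketch only; not copied from this file. */
const FFOutputFormat ff_fits_muxer = {
    .p.name         = "fits",
    .p.long_name    = NULL_IF_CONFIG_SMALL("Flexible Image Transport System"),
    .p.extensions   = "fits",
    .p.video_codec  = AV_CODEC_ID_FITS,
    .p.audio_codec  = AV_CODEC_ID_NONE,
    .priv_data_size = sizeof(FITSContext),      /* assumed private context type */
    .write_header   = fits_write_header,        /* assumed callback names */
    .write_packet   = fits_write_packet,
};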
AVCodecParameters
This struct describes the properties of an encoded stream.
@ AV_PIX_FMT_GBRP16BE
planar GBR 4:4:4 48bpp, big-endian
#define FF_OFMT_FLAG_ONLY_DEFAULT_CODECS
If this flag is set, then the only permitted audio/video/subtitle codec ids are AVOutputFormat....
@ AV_PIX_FMT_GRAY16BE
Y, 16bpp, big-endian.
@ AV_PIX_FMT_GBRAP
planar GBRA 4:4:4:4 32bpp
@ AV_PIX_FMT_GBRAP16BE
planar GBRA 4:4:4:4 64bpp, big-endian
AVCodecParameters * codecpar
Codec parameters associated with this stream.
void ffio_fill(AVIOContext *s, int b, int64_t count)
@ AV_PIX_FMT_GRAY8
Y, 8bpp.
#define NULL_IF_CONFIG_SMALL(x)
Return NULL if CONFIG_SMALL is true, otherwise the argument without modification.
static const uint8_t header[24]
void avio_write(AVIOContext *s, const unsigned char *buf, int size)
#define FF_OFMT_FLAG_MAX_ONE_OF_EACH
If this flag is set, it indicates that for each codec type whose corresponding default codec (i....
static int write_packet(Muxer *mux, OutputStream *ost, AVPacket *pkt)
@ AV_PIX_FMT_GBRP
planar GBR 4:4:4 24bpp
AVPacket
This structure stores compressed data.
static void write_header(FFV1Context *f)