1\input texinfo @c -*- texinfo -*-
2
3@settitle ffmpeg Documentation
4@titlepage
5@center @titlefont{ffmpeg Documentation}
6@end titlepage
7
8@top
9
10@contents
11
12@chapter Synopsis
13
14ffmpeg [@var{global_options}] @{[@var{input_file_options}] -i @file{input_file}@} ... @{[@var{output_file_options}] @file{output_file}@} ...
15
16@chapter Description
17@c man begin DESCRIPTION
18
19@command{ffmpeg} is a very fast video and audio converter that can also grab from
20a live audio/video source. It can also convert between arbitrary sample
21rates and resize video on the fly with a high quality polyphase filter.
22
23@command{ffmpeg} reads from an arbitrary number of input "files" (which can be regular
24files, pipes, network streams, grabbing devices, etc.), specified by the
25@code{-i} option, and writes to an arbitrary number of output "files", which are
26specified by a plain output filename. Anything found on the command line which
27cannot be interpreted as an option is considered to be an output filename.
28
29Each input or output file can, in principle, contain any number of streams of
30different types (video/audio/subtitle/attachment/data). The allowed number and/or
31types of streams may be limited by the container format. Selecting which
32streams from which inputs will go into which output is either done automatically
33or with the @code{-map} option (see the Stream selection chapter).
34
35To refer to input files in options, you must use their indices (0-based). E.g.
36the first input file is @code{0}, the second is @code{1}, etc. Similarly, streams
37within a file are referred to by their indices. E.g. @code{2:3} refers to the
38fourth stream in the third input file. Also see the Stream specifiers chapter.
39
40As a general rule, options are applied to the next specified
41file. Therefore, order is important, and you can have the same
42option on the command line multiple times. Each occurrence is
43then applied to the next input or output file.
44Exceptions from this rule are the global options (e.g. verbosity level),
45which should be specified first.
46
47Do not mix input and output files -- first specify all input files, then all
48output files. Also do not mix options which belong to different files. All
49options apply ONLY to the next input or output file and are reset between files.
50
51@itemize
52@item
53To set the video bitrate of the output file to 64 kbit/s:
54@example
55ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi
56@end example
57
58@item
59To force the frame rate of the output file to 24 fps:
60@example
61ffmpeg -i input.avi -r 24 output.avi
62@end example
63
64@item
65To force the frame rate of the input file (valid for raw formats only)
66to 1 fps and the frame rate of the output file to 24 fps:
67@example
68ffmpeg -r 1 -i input.m2v -r 24 output.avi
69@end example
70@end itemize
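
Since options are reset between files, the same per-output option can take a
different value for each output file. For instance, a sketch producing two
outputs at different video bitrates from one input (file names are
illustrative):
@example
ffmpeg -i input.avi -b:v 1000k output_high.mp4 -b:v 500k output_low.mp4
@end example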
71
72The format option may be needed for raw input files.
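
For instance, a raw video file carries no header describing its format, so the
demuxer has to be told explicitly; a sketch along these lines (format name and
parameters are illustrative and must match the actual data):
@example
ffmpeg -f rawvideo -pixel_format yuv420p -video_size 640x480 -framerate 25 -i input.raw output.mp4
@end example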
73
74@c man end DESCRIPTION
75
76@chapter Detailed description
77@c man begin DETAILED DESCRIPTION
78
79The transcoding process in @command{ffmpeg} for each output can be described by
80the following diagram:
81
82@example
83 _______              ______________
84|       |            |              |
85| input |  demuxer   | encoded data |   decoder
86| file  | ---------> | packets      | -----+
87|_______|            |______________|      |
88                                           v
89                                       _________
90                                      |         |
91                                      | decoded |
92                                      | frames  |
93                                      |_________|
94 ________             ______________       |
95|        |           |              |      |
96| output | <-------- | encoded data | <----+
97| file   |   muxer   | packets      |   encoder
98|________|           |______________|
99
100
101@end example
102
103@command{ffmpeg} calls the libavformat library (containing demuxers) to read
104input files and get packets containing encoded data from them. When there are
105multiple input files, @command{ffmpeg} tries to keep them synchronized by
tracking the lowest timestamp on any active input stream.
107
108Encoded packets are then passed to the decoder (unless streamcopy is selected
109for the stream, see further for a description). The decoder produces
110uncompressed frames (raw video/PCM audio/...) which can be processed further by
111filtering (see next section). After filtering, the frames are passed to the
112encoder, which encodes them and outputs encoded packets. Finally those are
113passed to the muxer, which writes the encoded packets to the output file.
114
115@section Filtering
116Before encoding, @command{ffmpeg} can process raw audio and video frames using
117filters from the libavfilter library. Several chained filters form a filter
118graph. @command{ffmpeg} distinguishes between two types of filtergraphs:
119simple and complex.
120
121@subsection Simple filtergraphs
122Simple filtergraphs are those that have exactly one input and output, both of
123the same type. In the above diagram they can be represented by simply inserting
124an additional step between decoding and encoding:
125
126@example
127 _________                        ______________
128|         |                      |              |
129| decoded |                      | encoded data |
130| frames  |\                   _ | packets      |
131|_________| \                  /||______________|
132             \   __________   /
133  simple     _\||          | /  encoder
134  filtergraph   | filtered |/
135                | frames   |
136                |__________|
137
138@end example
139
140Simple filtergraphs are configured with the per-stream @option{-filter} option
141(with @option{-vf} and @option{-af} aliases for video and audio respectively).
142A simple filtergraph for video can look for example like this:
143
144@example
145 _______        _____________        _______        ________
146|       |      |             |      |       |      |        |
147| input | ---> | deinterlace | ---> | scale | ---> | output |
148|_______|      |_____________|      |_______|      |________|
149
150@end example
151
Note that some filters change frame properties but not frame contents. E.g. the
@code{fps} filter changes the number of frames, but does not touch the
frame contents. Another example is the @code{setpts} filter, which
only sets timestamps and otherwise passes the frames unchanged.
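
As a concrete sketch, a simple filtergraph that deinterlaces and then scales
the video stream could be set up like this (filter parameters are
illustrative):
@example
ffmpeg -i input.mp4 -vf "yadif,scale=1280:720" output.mp4
@end example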
156
157@subsection Complex filtergraphs
158Complex filtergraphs are those which cannot be described as simply a linear
159processing chain applied to one stream. This is the case, for example, when the graph has
more than one input and/or output, or when the output stream type is different
from the input type. They can be represented with the following diagram:
162
163@example
164 _________
165|         |
166| input 0 |\                    __________
167|_________| \                  |          |
168             \   _________    /| output 0 |
169              \ |         |  / |__________|
170 _________     \| complex | /
171|         |     |         |/
172| input 1 |---->| filter  |\
173|_________|     |         | \   __________
174               /| graph   |  \ |          |
175              / |         |   \| output 1 |
176 _________   /  |_________|    |__________|
177|         | /
178| input 2 |/
179|_________|
180
181@end example
182
183Complex filtergraphs are configured with the @option{-filter_complex} option.
184Note that this option is global, since a complex filtergraph, by its nature,
185cannot be unambiguously associated with a single stream or file.
186
187The @option{-lavfi} option is equivalent to @option{-filter_complex}.
188
189A trivial example of a complex filtergraph is the @code{overlay} filter, which
190has two video inputs and one video output, containing one video overlaid on top
191of the other. Its audio counterpart is the @code{amix} filter.
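
For instance, a minimal sketch mixing two audio inputs with @code{amix} (file
names are illustrative):
@example
ffmpeg -i first.wav -i second.wav -filter_complex amix=inputs=2 mixed.wav
@end example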
192
193@section Stream copy
194Stream copy is a mode selected by supplying the @code{copy} parameter to the
195@option{-codec} option. It makes @command{ffmpeg} omit the decoding and encoding
196step for the specified stream, so it does only demuxing and muxing. It is useful
197for changing the container format or modifying container-level metadata. The
198diagram above will, in this case, simplify to this:
199
200@example
201 _______              ______________            ________
202|       |            |              |          |        |
203| input |  demuxer   | encoded data |  muxer   | output |
204| file  | ---------> | packets      | -------> | file   |
205|_______|            |______________|          |________|
206
207@end example
208
209Since there is no decoding or encoding, it is very fast and there is no quality
210loss. However, it might not work in some cases because of many factors. Applying
211filters is obviously also impossible, since filters work on uncompressed data.
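
A typical use is remuxing into a different container without re-encoding,
provided the target container supports the copied codecs; a sketch (file names
are illustrative):
@example
ffmpeg -i input.mkv -c copy output.mp4
@end example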
212
213@c man end DETAILED DESCRIPTION
214
215@chapter Stream selection
216@c man begin STREAM SELECTION
217
218By default, @command{ffmpeg} includes only one stream of each type (video, audio, subtitle)
219present in the input files and adds them to each output file.  It picks the
220"best" of each based upon the following criteria: for video, it is the stream
221with the highest resolution, for audio, it is the stream with the most channels, for
222subtitles, it is the first subtitle stream. In the case where several streams of
223the same type rate equally, the stream with the lowest index is chosen.
224
225You can disable some of those defaults by using the @code{-vn/-an/-sn} options. For
226full manual control, use the @code{-map} option, which disables the defaults just
227described.
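
For instance, the first command below merely drops audio from the
automatically selected streams, while the second takes full manual control and
maps the first video stream plus every audio stream (file names are
illustrative):
@example
ffmpeg -i INPUT -an OUTPUT
ffmpeg -i INPUT -map 0:v:0 -map 0:a OUTPUT
@end example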
228
229@c man end STREAM SELECTION
230
231@chapter Options
232@c man begin OPTIONS
233
234@include fftools-common-opts.texi
235
236@section Main options
237
238@table @option
239
240@item -f @var{fmt} (@emph{input/output})
241Force input or output file format. The format is normally auto detected for input
242files and guessed from the file extension for output files, so this option is not
243needed in most cases.
244
245@item -i @var{filename} (@emph{input})
246input file name
247
248@item -y (@emph{global})
249Overwrite output files without asking.
250
251@item -n (@emph{global})
252Do not overwrite output files, and exit immediately if a specified
253output file already exists.
254
255@item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
256@itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
257Select an encoder (when used before an output file) or a decoder (when used
258before an input file) for one or more streams. @var{codec} is the name of a
259decoder/encoder or a special value @code{copy} (output only) to indicate that
260the stream is not to be re-encoded.
261
262For example
263@example
264ffmpeg -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
265@end example
266encodes all video streams with libx264 and copies all audio streams.
267
268For each stream, the last matching @code{c} option is applied, so
269@example
270ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
271@end example
272will copy all the streams except the second video, which will be encoded with
273libx264, and the 138th audio, which will be encoded with libvorbis.
274
275@item -t @var{duration} (@emph{input/output})
276When used as an input option (before @code{-i}), limit the @var{duration} of
277data read from the input file.
278
279When used as an output option (before an output filename), stop writing the
280output after its duration reaches @var{duration}.
281
282@var{duration} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
283
284-to and -t are mutually exclusive and -t has priority.
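
For instance, to keep only the first 30 seconds of an input while stream
copying (file names are illustrative):
@example
ffmpeg -i input.mp4 -t 30 -c copy first30s.mp4
@end example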
285
286@item -to @var{position} (@emph{output})
287Stop writing the output at @var{position}.
288@var{position} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
289
290-to and -t are mutually exclusive and -t has priority.
291
292@item -fs @var{limit_size} (@emph{output})
293Set the file size limit, expressed in bytes.
294
295@item -ss @var{position} (@emph{input/output})
296When used as an input option (before @code{-i}), seeks in this input file to
@var{position}. Note that in most formats it is not possible to seek exactly, so
298@command{ffmpeg} will seek to the closest seek point before @var{position}.
299When transcoding and @option{-accurate_seek} is enabled (the default), this
300extra segment between the seek point and @var{position} will be decoded and
301discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
302will be preserved.
303
304When used as an output option (before an output filename), decodes but discards
305input until the timestamps reach @var{position}.
306
307@var{position} may be either in seconds or in @code{hh:mm:ss[.xxx]} form.
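
For instance, a sketch that seeks one minute into the input and transcodes 20
seconds from there (values are illustrative):
@example
ffmpeg -ss 00:01:00 -i input.mp4 -t 20 output.mp4
@end example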
308
309@item -itsoffset @var{offset} (@emph{input})
310Set the input time offset.
311
312@var{offset} must be a time duration specification,
313see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
314
315The offset is added to the timestamps of the input files. Specifying
316a positive offset means that the corresponding streams are delayed by
317the time duration specified in @var{offset}.
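
For instance, to delay the audio taken from a second input by half a second
relative to the video (file names are illustrative):
@example
ffmpeg -i video.mp4 -itsoffset 0.5 -i audio.wav -map 0:v -map 1:a output.mkv
@end example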
318
319@item -timestamp @var{date} (@emph{output})
320Set the recording timestamp in the container.
321
@var{date} must be a date specification,
323see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
324
325@item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
326Set a metadata key/value pair.
327
328An optional @var{metadata_specifier} may be given to set metadata
329on streams or chapters. See @code{-map_metadata} documentation for
330details.
331
332This option overrides metadata set with @code{-map_metadata}. It is
333also possible to delete metadata by using an empty value.
334
335For example, for setting the title in the output file:
336@example
337ffmpeg -i in.avi -metadata title="my title" out.flv
338@end example
339
340To set the language of the first audio stream:
341@example
ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
343@end example
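
To delete a tag, pass an empty value; for instance, to drop the title while
stream copying (file names are illustrative):
@example
ffmpeg -i in.avi -metadata title= -c copy out.avi
@end example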
344
345@item -target @var{type} (@emph{output})
346Specify target file type (@code{vcd}, @code{svcd}, @code{dvd}, @code{dv},
347@code{dv50}). @var{type} may be prefixed with @code{pal-}, @code{ntsc-} or
348@code{film-} to use the corresponding standard. All the format options
349(bitrate, codecs, buffer sizes) are then set automatically. You can just type:
350
351@example
352ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
353@end example
354
355Nevertheless you can specify additional options as long as you know
356they do not conflict with the standard, as in:
357
358@example
359ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
360@end example
361
362@item -dframes @var{number} (@emph{output})
363Set the number of data frames to record. This is an alias for @code{-frames:d}.
364
365@item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
366Stop writing to the stream after @var{framecount} frames.
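
For instance, to stop after writing 100 video frames (file names are
illustrative):
@example
ffmpeg -i input.mp4 -frames:v 100 output.mp4
@end example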
367
368@item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
369@itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
370Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
371codec-dependent.
If @var{qscale} is used without a @var{stream_specifier} then it applies only
to the video stream. This is to maintain compatibility with previous behavior,
and because specifying the same codec-specific value for two different codecs
(audio and video) is generally not what is intended when no stream specifier
is used.
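
For instance, a sketch using a fixed quantizer with an MPEG-4 encoder (the
meaning of the value depends on the codec):
@example
ffmpeg -i input.avi -c:v mpeg4 -q:v 5 output.avi
@end example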
377
378@anchor{filter_option}
379@item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
380Create the filtergraph specified by @var{filtergraph} and use it to
381filter the stream.
382
383@var{filtergraph} is a description of the filtergraph to apply to
384the stream, and must have a single input and a single output of the
same type as the stream. In the filtergraph, the input is associated
386to the label @code{in}, and the output to the label @code{out}. See
387the ffmpeg-filters manual for more information about the filtergraph
388syntax.
389
390See the @ref{filter_complex_option,,-filter_complex option} if you
391want to create filtergraphs with multiple inputs and/or outputs.
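
For instance, to scale the video stream to 640x360 (filter arguments are
illustrative):
@example
ffmpeg -i input.mp4 -filter:v "scale=640:360" output.mp4
@end example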
392
393@item -filter_script[:@var{stream_specifier}] @var{filename} (@emph{output,per-stream})
394This option is similar to @option{-filter}, the only difference is that its
395argument is the name of the file from which a filtergraph description is to be
396read.
397
398@item -pre[:@var{stream_specifier}] @var{preset_name} (@emph{output,per-stream})
399Specify the preset for matching stream(s).
400
401@item -stats (@emph{global})
402Print encoding progress/statistics. It is on by default, to explicitly
403disable it you need to specify @code{-nostats}.
404
405@item -progress @var{url} (@emph{global})
406Send program-friendly progress information to @var{url}.
407
408Progress information is written approximately every second and at the end of
409the encoding process. It is made of "@var{key}=@var{value}" lines. @var{key}
410consists of only alphanumeric characters. The last key of a sequence of
411progress information is always "progress".
412
413@item -stdin
414Enable interaction on standard input. On by default unless standard input is
415used as an input. To explicitly disable interaction you need to specify
416@code{-nostdin}.
417
418Disabling interaction on standard input is useful, for example, if
419ffmpeg is in the background process group. Roughly the same result can
420be achieved with @code{ffmpeg ... < /dev/null} but it requires a
421shell.
422
423@item -debug_ts (@emph{global})
424Print timestamp information. It is off by default. This option is
425mostly useful for testing and debugging purposes, and the output
426format may change from one version to another, so it should not be
427employed by portable scripts.
428
429See also the option @code{-fdebug ts}.
430
431@item -attach @var{filename} (@emph{output})
432Add an attachment to the output file. This is supported by a few formats
433like Matroska for e.g. fonts used in rendering subtitles. Attachments
434are implemented as a specific type of stream, so this option will add
435a new stream to the file. It is then possible to use per-stream options
436on this stream in the usual way. Attachment streams created with this
437option will be created after all the other streams (i.e. those created
438with @code{-map} or automatic mappings).
439
440Note that for Matroska you also have to set the mimetype metadata tag:
441@example
442ffmpeg -i INPUT -attach DejaVuSans.ttf -metadata:s:2 mimetype=application/x-truetype-font out.mkv
443@end example
444(assuming that the attachment stream will be third in the output file).
445
446@item -dump_attachment[:@var{stream_specifier}] @var{filename} (@emph{input,per-stream})
447Extract the matching attachment stream into a file named @var{filename}. If
448@var{filename} is empty, then the value of the @code{filename} metadata tag
449will be used.
450
451E.g. to extract the first attachment to a file named 'out.ttf':
452@example
453ffmpeg -dump_attachment:t:0 out.ttf -i INPUT
454@end example
455To extract all attachments to files determined by the @code{filename} tag:
456@example
457ffmpeg -dump_attachment:t "" -i INPUT
458@end example
459
460Technical note -- attachments are implemented as codec extradata, so this
461option can actually be used to extract extradata from any stream, not just
462attachments.
463
464@end table
465
466@section Video Options
467
468@table @option
469@item -vframes @var{number} (@emph{output})
470Set the number of video frames to record. This is an alias for @code{-frames:v}.
471@item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
472Set frame rate (Hz value, fraction or abbreviation).
473
474As an input option, ignore any timestamps stored in the file and instead
475generate timestamps assuming constant frame rate @var{fps}.
476
477As an output option, duplicate or drop input frames to achieve constant output
478frame rate @var{fps}.
479
480@item -s[:@var{stream_specifier}] @var{size} (@emph{input/output,per-stream})
481Set frame size.
482
483As an input option, this is a shortcut for the @option{video_size} private
484option, recognized by some demuxers for which the frame size is either not
485stored in the file or is configurable -- e.g. raw video or video grabbers.
486
As an output option, this inserts the @code{scale} video filter at the
488@emph{end} of the corresponding filtergraph. Please use the @code{scale} filter
489directly to insert it at the beginning or some other place.
490
491The format is @samp{wxh} (default - same as source).
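
For instance, to produce a 1280x720 output, which implicitly appends a
@code{scale} filter (file names are illustrative):
@example
ffmpeg -i input.mp4 -s 1280x720 output.mp4
@end example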
492
493@item -aspect[:@var{stream_specifier}] @var{aspect} (@emph{output,per-stream})
494Set the video display aspect ratio specified by @var{aspect}.
495
496@var{aspect} can be a floating point number string, or a string of the
497form @var{num}:@var{den}, where @var{num} and @var{den} are the
498numerator and denominator of the aspect ratio. For example "4:3",
499"16:9", "1.3333", and "1.7777" are valid argument values.
500
501If used together with @option{-vcodec copy}, it will affect the aspect ratio
502stored at container level, but not the aspect ratio stored in encoded
503frames, if it exists.
504
505@item -vn (@emph{output})
506Disable video recording.
507
508@item -vcodec @var{codec} (@emph{output})
509Set the video codec. This is an alias for @code{-codec:v}.
510
511@item -pass[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
512Select the pass number (1 or 2). It is used to do two-pass
513video encoding. The statistics of the video are recorded in the first
514pass into a log file (see also the option -passlogfile),
515and in the second pass that log file is used to generate the video
516at the exact requested bitrate.
517On pass 1, you may just deactivate audio and set output to null,
518examples for Windows and Unix:
519@example
520ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
521ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
522@end example
523
524@item -passlogfile[:@var{stream_specifier}] @var{prefix} (@emph{output,per-stream})
525Set two-pass log file name prefix to @var{prefix}, the default file name
526prefix is ``ffmpeg2pass''. The complete file name will be
527@file{PREFIX-N.log}, where N is a number specific to the output
stream.
529
530@item -vf @var{filtergraph} (@emph{output})
531Create the filtergraph specified by @var{filtergraph} and use it to
532filter the stream.
533
534This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
535@end table
536
537@section Advanced Video options
538
539@table @option
540@item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
541Set pixel format. Use @code{-pix_fmts} to show all the supported
542pixel formats.
If the selected pixel format cannot be selected, ffmpeg will print a
warning and select the best pixel format supported by the encoder.
If @var{pix_fmt} is prefixed by a @code{+}, ffmpeg will exit with an error
if the requested pixel format cannot be selected, and automatic conversions
547inside filtergraphs are disabled.
548If @var{pix_fmt} is a single @code{+}, ffmpeg selects the same pixel format
549as the input (or graph output) and automatic conversions are disabled.
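
For instance, to request the widely supported @code{yuv420p} pixel format for
the output (file names are illustrative):
@example
ffmpeg -i input.mov -pix_fmt yuv420p output.mp4
@end example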
550
551@item -sws_flags @var{flags} (@emph{input/output})
552Set SwScaler flags.
553@item -vdt @var{n}
554Discard threshold.
555
556@item -rc_override[:@var{stream_specifier}] @var{override} (@emph{output,per-stream})
Rate control override for specific intervals, formatted as "int,int,int"
list separated with slashes. The first two values are the beginning and
end frame numbers, the last one is the quantizer to use if positive, or the
quality factor if negative.
561
562@item -ilme
563Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
564Use this option if your input file is interlaced and you want
565to keep the interlaced format for minimum losses.
566The alternative is to deinterlace the input stream with
567@option{-deinterlace}, but deinterlacing introduces losses.
568@item -psnr
569Calculate PSNR of compressed frames.
570@item -vstats
571Dump video coding statistics to @file{vstats_HHMMSS.log}.
572@item -vstats_file @var{file}
573Dump video coding statistics to @var{file}.
574@item -top[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
575top=1/bottom=0/auto=-1 field first
576@item -dc @var{precision}
577Intra_dc_precision.
578@item -vtag @var{fourcc/tag} (@emph{output})
579Force video tag/fourcc. This is an alias for @code{-tag:v}.
580@item -qphist (@emph{global})
Show QP histogram.
@item -vbsf @var{bitstream_filter}
Deprecated, see -bsf
584
585@item -force_key_frames[:@var{stream_specifier}] @var{time}[,@var{time}...] (@emph{output,per-stream})
586@item -force_key_frames[:@var{stream_specifier}] expr:@var{expr} (@emph{output,per-stream})
587Force key frames at the specified timestamps, more precisely at the first
588frames after each specified time.
589
590If the argument is prefixed with @code{expr:}, the string @var{expr}
is interpreted as an expression and is evaluated for each frame. A
592key frame is forced in case the evaluation is non-zero.
593
594If one of the times is "@code{chapters}[@var{delta}]", it is expanded into
595the time of the beginning of all chapters in the file, shifted by
596@var{delta}, expressed as a time in seconds.
597This option can be useful to ensure that a seek point is present at a
598chapter mark or any other designated place in the output file.
599
600For example, to insert a key frame at 5 minutes, plus key frames 0.1 second
601before the beginning of every chapter:
602@example
603-force_key_frames 0:05:00,chapters-0.1
604@end example
605
606The expression in @var{expr} can contain the following constants:
607@table @option
608@item n
the number of the current processed frame, starting from 0
610@item n_forced
611the number of forced frames
612@item prev_forced_n
613the number of the previous forced frame, it is @code{NAN} when no
614keyframe was forced yet
615@item prev_forced_t
616the time of the previous forced frame, it is @code{NAN} when no
617keyframe was forced yet
618@item t
619the time of the current processed frame
620@end table
621
622For example to force a key frame every 5 seconds, you can specify:
623@example
624-force_key_frames expr:gte(t,n_forced*5)
625@end example
626
627To force a key frame 5 seconds after the time of the last forced one,
628starting from second 13:
629@example
630-force_key_frames expr:if(isnan(prev_forced_t),gte(t,13),gte(t,prev_forced_t+5))
631@end example
632
633Note that forcing too many keyframes is very harmful for the lookahead
634algorithms of certain encoders: using fixed-GOP options or similar
635would be more efficient.
636
637@item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
638When doing stream copy, copy also non-key frames found at the
639beginning.
640
641@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
642Use hardware acceleration to decode the matching stream(s). The allowed values
643of @var{hwaccel} are:
644@table @option
645@item none
646Do not use any hardware acceleration (the default).
647
648@item auto
649Automatically select the hardware acceleration method.
650
651@item vda
652Use Apple VDA hardware acceleration.
653
654@item vdpau
655Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
656
657@item dxva2
658Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
659@end table
660
661This option has no effect if the selected hwaccel is not available or not
662supported by the chosen decoder.
663
664Note that most acceleration methods are intended for playback and will not be
665faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
666will usually need to copy the decoded frames from the GPU memory into the system
667memory, resulting in further performance loss. This option is thus mainly
668useful for testing.
669
670@item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
671Select a device to use for hardware acceleration.
672
673This option only makes sense when the @option{-hwaccel} option is also
674specified. Its exact meaning depends on the specific hardware acceleration
675method chosen.
676
677@table @option
678@item vdpau
679For VDPAU, this option specifies the X11 display/screen to use. If this option
is not specified, the value of the @var{DISPLAY} environment variable is used.
681
682@item dxva2
683For DXVA2, this option should contain the number of the display adapter to use.
684If this option is not specified, the default adapter is used.
685@end table
686@end table
687
688@section Audio Options
689
690@table @option
691@item -aframes @var{number} (@emph{output})
692Set the number of audio frames to record. This is an alias for @code{-frames:a}.
693@item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
694Set the audio sampling frequency. For output streams it is set by
695default to the frequency of the corresponding input stream. For input
696streams this option only makes sense for audio grabbing devices and raw
697demuxers and is mapped to the corresponding demuxer options.
698@item -aq @var{q} (@emph{output})
Set the audio quality (codec-specific, VBR). This is an alias for @code{-q:a}.
700@item -ac[:@var{stream_specifier}] @var{channels} (@emph{input/output,per-stream})
701Set the number of audio channels. For output streams it is set by
702default to the number of input audio channels. For input streams
703this option only makes sense for audio grabbing devices and raw demuxers
704and is mapped to the corresponding demuxer options.
705@item -an (@emph{output})
706Disable audio recording.
707@item -acodec @var{codec} (@emph{input/output})
708Set the audio codec. This is an alias for @code{-codec:a}.
709@item -sample_fmt[:@var{stream_specifier}] @var{sample_fmt} (@emph{output,per-stream})
710Set the audio sample format. Use @code{-sample_fmts} to get a list
711of supported sample formats.
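
For instance, a sketch resampling to 44.1 kHz stereo with 16-bit samples
(values are illustrative):
@example
ffmpeg -i input.wav -ar 44100 -ac 2 -sample_fmt s16 output.flac
@end example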
712
713@item -af @var{filtergraph} (@emph{output})
714Create the filtergraph specified by @var{filtergraph} and use it to
715filter the stream.
716
717This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
718@end table
719
720@section Advanced Audio options
721
722@table @option
723@item -atag @var{fourcc/tag} (@emph{output})
724Force audio tag/fourcc. This is an alias for @code{-tag:a}.
725@item -absf @var{bitstream_filter}
726Deprecated, see -bsf
727@item -guess_layout_max @var{channels} (@emph{input,per-stream})
728If some input channel layout is not known, try to guess only if it
729corresponds to at most the specified number of channels. For example, 2
730tells to @command{ffmpeg} to recognize 1 channel as mono and 2 channels as
731stereo but not 6 channels as 5.1. The default is to always try to guess. Use
7320 to disable all guessing.
733@end table
734
735@section Subtitle options
736
737@table @option
738@item -scodec @var{codec} (@emph{input/output})
739Set the subtitle codec. This is an alias for @code{-codec:s}.
740@item -sn (@emph{output})
741Disable subtitle recording.
742@item -sbsf @var{bitstream_filter}
743Deprecated, see -bsf
744@end table
745
746@section Advanced Subtitle options
747
748@table @option
749
750@item -fix_sub_duration
Fix subtitle durations. For each subtitle, wait for the next packet in the
752same stream and adjust the duration of the first to avoid overlap. This is
753necessary with some subtitles codecs, especially DVB subtitles, because the
754duration in the original packet is only a rough estimate and the end is
755actually marked by an empty subtitle frame. Failing to use this option when
756necessary can result in exaggerated durations or muxing failures due to
757non-monotonic timestamps.
758
759Note that this option will delay the output of all data until the next
760subtitle packet is decoded: it may increase memory consumption and latency a
761lot.
762
763@item -canvas_size @var{size}
764Set the size of the canvas used to render subtitles.
765
766@end table
767
768@section Advanced options
769
770@table @option
771@item -map [-]@var{input_file_id}[:@var{stream_specifier}][,@var{sync_file_id}[:@var{stream_specifier}]] | @var{[linklabel]} (@emph{output})
772
773Designate one or more input streams as a source for the output file. Each input
774stream is identified by the input file index @var{input_file_id} and
775the input stream index @var{input_stream_id} within the input
776file. Both indices start at 0. If specified,
777@var{sync_file_id}:@var{stream_specifier} sets which input stream
778is used as a presentation sync reference.
779
780The first @code{-map} option on the command line specifies the
781source for output stream 0, the second @code{-map} option specifies
782the source for output stream 1, etc.
783
784A @code{-} character before the stream identifier creates a "negative" mapping.
785It disables matching streams from already created mappings.
786
787An alternative @var{[linklabel]} form will map outputs from complex filter
788graphs (see the @option{-filter_complex} option) to the output file.
789@var{linklabel} must correspond to a defined output link label in the graph.
790
791For example, to map ALL streams from the first input file to output
792@example
793ffmpeg -i INPUT -map 0 output
794@end example
795
796For example, if you have two audio streams in the first input file,
797these streams are identified by "0:0" and "0:1". You can use
798@code{-map} to select which streams to place in an output file. For
799example:
800@example
801ffmpeg -i INPUT -map 0:1 out.wav
802@end example
803will map the input stream in @file{INPUT} identified by "0:1" to
804the (single) output stream in @file{out.wav}.
805
806For example, to select the stream with index 2 from input file
807@file{a.mov} (specified by the identifier "0:2"), and stream with
808index 6 from input @file{b.mov} (specified by the identifier "1:6"),
809and copy them to the output file @file{out.mov}:
810@example
811ffmpeg -i a.mov -i b.mov -c copy -map 0:2 -map 1:6 out.mov
812@end example
813
814To select all video and the third audio stream from an input file:
815@example
816ffmpeg -i INPUT -map 0:v -map 0:a:2 OUTPUT
817@end example
818
819To map all the streams except the second audio, use negative mappings
820@example
821ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
822@end example
823
824Note that using this option disables the default mappings for this output file.
825
826@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
827Map an audio channel from a given input to an output. If
828@var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
829be mapped on all the audio streams.
830
831Using "-1" instead of
832@var{input_file_id}.@var{stream_specifier}.@var{channel_id} will map a muted
833channel.
834
835For example, assuming @var{INPUT} is a stereo audio file, you can switch the
836two audio channels with the following command:
837@example
838ffmpeg -i INPUT -map_channel 0.0.1 -map_channel 0.0.0 OUTPUT
839@end example
840
841If you want to mute the first channel and keep the second:
842@example
843ffmpeg -i INPUT -map_channel -1 -map_channel 0.0.1 OUTPUT
844@end example
845
846The order of the "-map_channel" option specifies the order of the channels in
847the output stream. The output channel layout is guessed from the number of
848channels mapped (mono if one "-map_channel", stereo if two, etc.). Using "-ac"
in combination with "-map_channel" makes the channel gain levels be updated if
850input and output channel layouts don't match (for instance two "-map_channel"
851options and "-ac 6").
852
853You can also extract each channel of an input to specific outputs; the following
854command extracts two channels of the @var{INPUT} audio stream (file 0, stream 0)
855to the respective @var{OUTPUT_CH0} and @var{OUTPUT_CH1} outputs:
856@example
857ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1
858@end example
859
860The following example splits the channels of a stereo input into two separate
861streams, which are put into the same output file:
862@example
863ffmpeg -i stereo.wav -map 0:0 -map 0:0 -map_channel 0.0.0:0.0 -map_channel 0.0.1:0.1 -y out.ogg
864@end example
865
866Note that currently each output stream can only contain channels from a single
867input stream; you can't for example use "-map_channel" to pick multiple input
868audio channels contained in different streams (from the same or different files)
869and merge them into a single output stream. It is therefore not currently
870possible, for example, to turn two separate mono streams into a single stereo
871stream. However splitting a stereo stream into two single channel mono streams
872is possible.
873
874If you need this feature, a possible workaround is to use the @emph{amerge}
875filter. For example, if you need to merge a media (here @file{input.mkv}) with 2
876mono audio streams into one single stereo channel audio stream (and keep the
877video stream), you can use the following command:
878@example
879ffmpeg -i input.mkv -filter_complex "[0:1] [0:2] amerge" -c:a pcm_s16le -c:v copy output.mkv
880@end example
881
882@item -map_metadata[:@var{metadata_spec_out}] @var{infile}[:@var{metadata_spec_in}] (@emph{output,per-metadata})
883Set metadata information of the next output file from @var{infile}. Note that
884those are file indices (zero-based), not filenames.
Optional @var{metadata_spec_in/out} parameters specify which metadata to copy.
886A metadata specifier can have the following forms:
887@table @option
888@item @var{g}
889global metadata, i.e. metadata that applies to the whole file
890
891@item @var{s}[:@var{stream_spec}]
892per-stream metadata. @var{stream_spec} is a stream specifier as described
893in the @ref{Stream specifiers} chapter. In an input metadata specifier, the first
894matching stream is copied from. In an output metadata specifier, all matching
895streams are copied to.
896
897@item @var{c}:@var{chapter_index}
898per-chapter metadata. @var{chapter_index} is the zero-based chapter index.
899
900@item @var{p}:@var{program_index}
901per-program metadata. @var{program_index} is the zero-based program index.
902@end table
903If metadata specifier is omitted, it defaults to global.
904
905By default, global metadata is copied from the first input file,
906per-stream and per-chapter metadata is copied along with streams/chapters. These
907default mappings are disabled by creating any mapping of the relevant type. A negative
908file index can be used to create a dummy mapping that just disables automatic copying.
909
910For example to copy metadata from the first stream of the input file to global metadata
911of the output file:
912@example
913ffmpeg -i in.ogg -map_metadata 0:s:0 out.mp3
914@end example
915
916To do the reverse, i.e. copy global metadata to all audio streams:
917@example
918ffmpeg -i in.mkv -map_metadata:s:a 0:g out.mkv
919@end example
920Note that simple @code{0} would work as well in this example, since global
921metadata is assumed by default.
922
923@item -map_chapters @var{input_file_index} (@emph{output})
924Copy chapters from input file with index @var{input_file_index} to the next
925output file. If no chapter mapping is specified, then chapters are copied from
926the first input file with at least one chapter. Use a negative file index to
927disable any chapter copying.
928
929@item -benchmark (@emph{global})
930Show benchmarking information at the end of an encode.
931Shows CPU time used and maximum memory consumption.
932Maximum memory consumption is not supported on all systems,
933it will usually display as 0 if not supported.
934@item -benchmark_all (@emph{global})
935Show benchmarking information during the encode.
936Shows CPU time used in various steps (audio/video encode/decode).
937@item -timelimit @var{duration} (@emph{global})
938Exit after ffmpeg has been running for @var{duration} seconds.
939@item -dump (@emph{global})
940Dump each input packet to stderr.
941@item -hex (@emph{global})
942When dumping packets, also dump the payload.
943@item -re (@emph{input})
Read input at native frame rate. Mainly used to simulate a grab device,
945or live input stream (e.g. when reading from a file). Should not be used
946with actual grab devices or live input streams (where it can cause packet
947loss).
948By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
949This option will slow down the reading of the input(s) to the native frame rate
950of the input(s). It is useful for real-time output (e.g. live streaming).
951@item -loop_input
952Loop over the input stream. Currently it works only for image
953streams. This option is used for automatic FFserver testing.
954This option is deprecated, use -loop 1.
955@item -loop_output @var{number_of_times}
956Repeatedly loop output for formats that support looping such as animated GIF
957(0 will loop the output infinitely).
958This option is deprecated, use -loop.
959@item -vsync @var{parameter}
960Video sync method.
961For compatibility reasons old values can be specified as numbers.
962Newly added values will have to be specified as strings always.
963
964@table @option
965@item 0, passthrough
966Each frame is passed with its timestamp from the demuxer to the muxer.
967@item 1, cfr
968Frames will be duplicated and dropped to achieve exactly the requested
969constant frame rate.
970@item 2, vfr
971Frames are passed through with their timestamp or dropped so as to
972prevent 2 frames from having the same timestamp.
973@item drop
974As passthrough but destroys all timestamps, making the muxer generate
975fresh timestamps based on frame-rate.
976@item -1, auto
977Chooses between 1 and 2 depending on muxer capabilities. This is the
978default method.
979@end table
980
981Note that the timestamps may be further modified by the muxer, after this.
982For example, in the case that the format option @option{avoid_negative_ts}
983is enabled.
984
985With -map you can select from which stream the timestamps should be
986taken. You can leave either video or audio unchanged and sync the
987remaining stream(s) to the unchanged one.
988
989@item -async @var{samples_per_second}
990Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
991the parameter is the maximum samples per second by which the audio is changed.
992-async 1 is a special case where only the start of the audio stream is corrected
993without any later correction.
994
995Note that the timestamps may be further modified by the muxer, after this.
996For example, in the case that the format option @option{avoid_negative_ts}
997is enabled.
998
999This option has been deprecated. Use the @code{aresample} audio filter instead.
1000
1001@item -copyts
1002Do not process input timestamps, but keep their values without trying
1003to sanitize them. In particular, do not remove the initial start time
1004offset value.
1005
1006Note that, depending on the @option{vsync} option or on specific muxer
1007processing (e.g. in case the format option @option{avoid_negative_ts}
1008is enabled) the output timestamps may mismatch with the input
1009timestamps even when this option is selected.
1010
1011@item -copytb @var{mode}
1012Specify how to set the encoder timebase when stream copying.  @var{mode} is an
1013integer numeric value, and can assume one of the following values:
1014
1015@table @option
1016@item 1
1017Use the demuxer timebase.
1018
1019The time base is copied to the output encoder from the corresponding input
1020demuxer. This is sometimes required to avoid non monotonically increasing
1021timestamps when copying video streams with variable frame rate.
1022
1023@item 0
1024Use the decoder timebase.
1025
1026The time base is copied to the output encoder from the corresponding input
1027decoder.
1028
1029@item -1
1030Try to make the choice automatically, in order to generate a sane output.
1031@end table
1032
1033Default value is -1.
1034
1035@item -shortest (@emph{output})
1036Finish encoding when the shortest input stream ends.
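
For instance, when muxing a video with a longer audio track, stop as soon as
the shorter stream ends (file names are illustrative):
@example
ffmpeg -i video.mp4 -i audio.mp3 -map 0:v -map 1:a -shortest output.mp4
@end example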
1037@item -dts_delta_threshold
1038Timestamp discontinuity delta threshold.
1039@item -muxdelay @var{seconds} (@emph{input})
1040Set the maximum demux-decode delay.
1041@item -muxpreload @var{seconds} (@emph{input})
1042Set the initial demux-decode delay.
1043@item -streamid @var{output-stream-index}:@var{new-value} (@emph{output})
1044Assign a new stream-id value to an output stream. This option should be
1045specified prior to the output filename to which it applies.
1046For the situation where multiple output files exist, a streamid
1047may be reassigned to a different value.
1048
1049For example, to set the stream 0 PID to 33 and the stream 1 PID to 36 for
1050an output mpegts file:
1051@example
1052ffmpeg -i infile -streamid 0:33 -streamid 1:36 out.ts
1053@end example
1054
1055@item -bsf[:@var{stream_specifier}] @var{bitstream_filters} (@emph{output,per-stream})
1056Set bitstream filters for matching streams. @var{bitstream_filters} is
1057a comma-separated list of bitstream filters. Use the @code{-bsfs} option
1058to get the list of bitstream filters.
1059@example
1060ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
1061@end example
1062@example
1063ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
1064@end example
1065
1066@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
1067Force a tag/fourcc for matching streams.
1068
1069@item -timecode @var{hh}:@var{mm}:@var{ss}SEP@var{ff}
Specify Timecode for writing. @var{SEP} is ':' for non-drop timecode and ';'
1071(or '.') for drop.
1072@example
1073ffmpeg -i input.mpg -timecode 01:02:03.04 -r 30000/1001 -s ntsc output.mpg
1074@end example
1075
1076@anchor{filter_complex_option}
1077@item -filter_complex @var{filtergraph} (@emph{global})
Define a complex filtergraph, i.e. one with an arbitrary number of inputs and/or
1079outputs. For simple graphs -- those with one input and one output of the same
1080type -- see the @option{-filter} options. @var{filtergraph} is a description of
1081the filtergraph, as described in the ``Filtergraph syntax'' section of the
1082ffmpeg-filters manual.
1083
1084Input link labels must refer to input streams using the
1085@code{[file_index:stream_specifier]} syntax (i.e. the same as @option{-map}
1086uses). If @var{stream_specifier} matches multiple streams, the first one will be
1087used. An unlabeled input will be connected to the first unused input stream of
1088the matching type.
1089
1090Output link labels are referred to with @option{-map}. Unlabeled outputs are
1091added to the first output file.
1092
1093Note that with this option it is possible to use only lavfi sources without
1094normal input files.
1095
1096For example, to overlay an image over video
1097@example
1098ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map
1099'[out]' out.mkv
1100@end example
1101Here @code{[0:v]} refers to the first video stream in the first input file,
1102which is linked to the first (main) input of the overlay filter. Similarly the
1103first video stream in the second input is linked to the second (overlay) input
1104of overlay.
1105
1106Assuming there is only one video stream in each input file, we can omit input
1107labels, so the above is equivalent to
1108@example
1109ffmpeg -i video.mkv -i image.png -filter_complex 'overlay[out]' -map
1110'[out]' out.mkv
1111@end example
1112
1113Furthermore we can omit the output label and the single output from the filter
1114graph will be added to the output file automatically, so we can simply write
1115@example
1116ffmpeg -i video.mkv -i image.png -filter_complex 'overlay' out.mkv
1117@end example
1118
1119To generate 5 seconds of pure red video using lavfi @code{color} source:
1120@example
1121ffmpeg -filter_complex 'color=c=red' -t 5 out.mkv
1122@end example
1123
1124@item -lavfi @var{filtergraph} (@emph{global})
Define a complex filtergraph, i.e. one with an arbitrary number of inputs and/or
1126outputs. Equivalent to @option{-filter_complex}.
1127
1128@item -filter_complex_script @var{filename} (@emph{global})
1129This option is similar to @option{-filter_complex}, the only difference is that
1130its argument is the name of the file from which a complex filtergraph
1131description is to be read.
1132
1133@item -accurate_seek (@emph{input})
1134This option enables or disables accurate seeking in input files with the
1135@option{-ss} option. It is enabled by default, so seeking is accurate when
1136transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
1137e.g. when copying some streams and transcoding the others.
1138
1139@item -override_ffserver (@emph{global})
1140Overrides the input specifications from @command{ffserver}. Using this
1141option you can map any input stream to @command{ffserver} and control
1142many aspects of the encoding from @command{ffmpeg}. Without this
1143option @command{ffmpeg} will transmit to @command{ffserver} what is
1144requested by @command{ffserver}.
1145
1146The option is intended for cases where features are needed that cannot be
1147specified to @command{ffserver} but can be to @command{ffmpeg}.
1148
1149@item -discard (@emph{input})
1150Allows discarding specific streams or frames of streams at the demuxer.
1151Not all demuxers support this.
1152
1153@table @option
1154@item none
1155Discard no frame.
1156
1157@item default
1158Default, which discards no frames.
1159
1160@item noref
1161Discard all non-reference frames.
1162
1163@item bidir
1164Discard all bidirectional frames.
1165
1166@item nokey
Discard all frames except keyframes.
1168
1169@item all
1170Discard all frames.
1171@end table
1172
1173@end table
1174
1175As a special exception, you can use a bitmap subtitle stream as input: it
1176will be converted into a video with the same size as the largest video in
1177the file, or 720x576 if no video is present. Note that this is an
1178experimental and temporary solution. It will be removed once libavfilter has
1179proper support for subtitles.
1180
1181For example, to hardcode subtitles on top of a DVB-T recording stored in
1182MPEG-TS format, delaying the subtitles by 1 second:
1183@example
1184ffmpeg -i input.ts -filter_complex \
1185  '[#0x2ef] setpts=PTS+1/TB [sub] ; [#0x2d0] [sub] overlay' \
1186  -sn -map '#0x2dc' output.mkv
1187@end example
1188(0x2d0, 0x2dc and 0x2ef are the MPEG-TS PIDs of respectively the video,
1189audio and subtitles streams; 0:0, 0:3 and 0:7 would have worked too)
1190
1191@section Preset files
1192A preset file contains a sequence of @var{option}=@var{value} pairs,
1193one for each line, specifying a sequence of options which would be
1194awkward to specify on the command line. Lines starting with the hash
1195('#') character are ignored and are used to provide comments. Check
1196the @file{presets} directory in the FFmpeg source tree for examples.
1197
1198Preset files are specified with the @code{vpre}, @code{apre},
1199@code{spre}, and @code{fpre} options. The @code{fpre} option takes the
1200filename of the preset instead of a preset name as input and can be
1201used for any kind of codec. For the @code{vpre}, @code{apre}, and
1202@code{spre} options, the options specified in a preset file are
1203applied to the currently selected codec of the same type as the preset
1204option.
1205
1206The argument passed to the @code{vpre}, @code{apre}, and @code{spre}
1207preset options identifies the preset file to use according to the
1208following rules:
1209
1210First ffmpeg searches for a file named @var{arg}.ffpreset in the
1211directories @file{$FFMPEG_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
1212the datadir defined at configuration time (usually @file{PREFIX/share/ffmpeg})
1213or in a @file{ffpresets} folder along the executable on win32,
1214in that order. For example, if the argument is @code{libvpx-1080p}, it will
1215search for the file @file{libvpx-1080p.ffpreset}.
1216
1217If no such file is found, then ffmpeg will search for a file named
1218@var{codec_name}-@var{arg}.ffpreset in the above-mentioned
1219directories, where @var{codec_name} is the name of the codec to which
1220the preset file options will be applied. For example, if you select
1221the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
1222then it will search for the file @file{libvpx-1080p.ffpreset}.
1223@c man end OPTIONS
1224
1225@chapter Tips
1226@c man begin TIPS
1227
1228@itemize
1229@item
1230For streaming at very low bitrates, use a low frame rate
1231and a small GOP size. This is especially true for RealVideo where
1232the Linux player does not seem to be very fast, so it can miss
1233frames. An example is:
1234
1235@example
1236ffmpeg -g 3 -r 3 -t 10 -b:v 50k -s qcif -f rv10 /tmp/b.rm
1237@end example
1238
1239@item
1240The parameter 'q' which is displayed while encoding is the current
1241quantizer. The value 1 indicates that a very good quality could
1242be achieved. The value 31 indicates the worst quality. If q=31 appears
1243too often, it means that the encoder cannot compress enough to meet
1244your bitrate. You must either increase the bitrate, decrease the
1245frame rate or decrease the frame size.
1246
1247@item
1248If your computer is not fast enough, you can speed up the
1249compression at the expense of the compression ratio. You can use
1250'-me zero' to speed up motion estimation, and '-g 0' to disable
1251motion estimation completely (you have only I-frames, which means it
1252is about as good as JPEG compression).
1253
1254@item
1255To have very low audio bitrates, reduce the sampling frequency
1256(down to 22050 Hz for MPEG audio, 22050 or 11025 for AC-3).
1257
1258@item
1259To have a constant quality (but a variable bitrate), use the option
'-qscale n' where 'n' is between 1 (excellent quality) and 31 (worst
1261quality).
1262
1263@end itemize
1264@c man end TIPS
1265
1266@chapter Examples
1267@c man begin EXAMPLES
1268
1269@section Preset files
1270
1271A preset file contains a sequence of @var{option=value} pairs, one for
1272each line, specifying a sequence of options which can be specified also on
1273the command line. Lines starting with the hash ('#') character are ignored and
1274are used to provide comments. Empty lines are also ignored. Check the
1275@file{presets} directory in the FFmpeg source tree for examples.
1276
1277Preset files are specified with the @code{pre} option, this option takes a
1278preset name as input.  FFmpeg searches for a file named @var{preset_name}.avpreset in
1279the directories @file{$AVCONV_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
1280the data directory defined at configuration time (usually @file{$PREFIX/share/ffmpeg})
1281in that order.  For example, if the argument is @code{libx264-max}, it will
1282search for the file @file{libx264-max.avpreset}.

@section Video and Audio grabbing

If you specify the input format and device, then ffmpeg can grab video
and audio directly.

@example
ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
@end example

Or with an ALSA audio source (mono input, card id 1) instead of OSS:
@example
ffmpeg -f alsa -ac 1 -i hw:1 -f video4linux2 -i /dev/video0 /tmp/out.mpg
@end example

Note that you must activate the right video source and channel before
launching ffmpeg, using any TV viewer such as
@uref{http://linux.bytesex.org/xawtv/, xawtv} by Gerd Knorr. You also
have to set the audio recording levels correctly with a
standard mixer.

@section X11 grabbing

Grab the X11 display with ffmpeg via

@example
ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0 /tmp/out.mpg
@end example

0.0 is the display.screen number of your X11 server, the same as the
DISPLAY environment variable.

@example
ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0+10,20 /tmp/out.mpg
@end example

As above, 0.0 is the display.screen number of your X11 server; 10 is the
x-offset and 20 the y-offset for the grabbing.
@section Video and Audio file format conversion

Any supported file format and protocol can serve as input to ffmpeg:

Examples:
@itemize
@item
You can use YUV files as input:

@example
ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
@end example

It will use the files:
@example
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
@end example

The Y files use twice the resolution of the U and V files. They are
raw files, without a header. They can be generated by all decent video
decoders. You must specify the size of the image with the @option{-s} option
if ffmpeg cannot guess it.

@item
You can input from a raw YUV420P file:

@example
ffmpeg -i /tmp/test.yuv /tmp/out.avi
@end example

test.yuv is a file containing raw YUV planar data. Each frame is composed
of the Y plane followed by the U and V planes at half vertical and
horizontal resolution.
@item
You can output to a raw YUV420P file:

@example
ffmpeg -i mydivx.avi hugefile.yuv
@end example

@item
You can set several input files and output files:

@example
ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
@end example

Converts the audio file a.wav and the raw YUV video file a.yuv
to the MPEG file a.mpg.

@item
You can also do audio and video conversions at the same time:

@example
ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
@end example

Converts a.wav to MPEG audio at a 22050 Hz sample rate.

@item
You can encode to several formats at the same time and define a
mapping from input stream to output streams:

@example
ffmpeg -i /tmp/a.wav -map 0:a -b:a 64k /tmp/a.mp2 -map 0:a -b:a 128k /tmp/b.mp2
@end example

Converts a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map
file:index' specifies which input stream is used for each output
stream, in the order of the definition of output streams.

@item
You can transcode decrypted VOBs:

@example
ffmpeg -i snatch_1.vob -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi
@end example

This is a typical DVD ripping example; the input is a VOB file, the
output an AVI file with MPEG-4 video and MP3 audio. Note that in this
command we use B-frames so the MPEG-4 stream is DivX5 compatible, and the
GOP size is 300, which means one intra frame every 10 seconds for 29.97 fps
input video. Furthermore, the audio stream is MP3-encoded, so you need
to enable LAME support by passing @code{--enable-libmp3lame} to configure.
The mapping is particularly useful for DVD transcoding
to get the desired audio language.

NOTE: To see the supported input formats, use @code{ffmpeg -formats}.
@item
You can extract images from a video, or create a video from many images:

For extracting images from a video:
@example
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
@end example

This will extract one video frame per second from the video and will
output them in files named @file{foo-001.jpeg}, @file{foo-002.jpeg},
etc. Images will be rescaled to fit the new WxH values.

If you want to extract just a limited number of frames, you can use the
above command in combination with the -vframes or -t option, or in
combination with -ss to start extracting from a certain point in time;
see the example after this list.

For creating a video from many images:
@example
ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
@end example

The syntax @code{foo-%03d.jpeg} specifies the use of a decimal number
composed of three digits padded with zeroes to express the sequence
number. It is the same syntax supported by the C printf function, but
only formats accepting a normal integer are suitable.

When importing an image sequence, -i also supports expanding
shell-like wildcard patterns (globbing) internally, by selecting the
image2-specific @code{-pattern_type glob} option.

For example, for creating a video from filenames matching the glob pattern
@code{foo-*.jpeg}:
@example
ffmpeg -f image2 -pattern_type glob -i 'foo-*.jpeg' -r 12 -s WxH foo.avi
@end example

@item
You can put many streams of the same type in the output:

@example
ffmpeg -i test1.avi -i test2.avi -map 0:3 -map 0:2 -map 0:1 -map 0:0 -c copy test12.nut
@end example

The resulting output file @file{test12.nut} will contain the first four
streams from the first input file in reverse order.

@item
To force CBR video output:
@example
ffmpeg -i myfile.avi -b:v 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
@end example

@item
The four options lmin, lmax, mblmin and mblmax use 'lambda' units,
but you may use the QP2LAMBDA constant to easily convert from 'q' units:
@example
ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
@end example

@end itemize
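
As mentioned in the image extraction example above, a combined command
(the file name and values are only placeholders) that seeks 30 seconds
into the input and then extracts ten frames at one frame per second:

@example
ffmpeg -ss 30 -i foo.avi -r 1 -vframes 10 -f image2 foo-%03d.jpeg
@end example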
@c man end EXAMPLES

@include config.texi
@ifset config-all
@ifset config-avutil
@include utils.texi
@end ifset
@ifset config-avcodec
@include codecs.texi
@include bitstream_filters.texi
@end ifset
@ifset config-avformat
@include formats.texi
@include protocols.texi
@end ifset
@ifset config-avdevice
@include devices.texi
@end ifset
@ifset config-swresample
@include resampler.texi
@end ifset
@ifset config-swscale
@include scaler.texi
@end ifset
@ifset config-avfilter
@include filters.texi
@end ifset
@end ifset

@chapter See Also

@ifhtml
@ifset config-all
@url{ffmpeg.html,ffmpeg}
@end ifset
@ifset config-not-all
@url{ffmpeg-all.html,ffmpeg-all},
@end ifset
@url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
@url{ffmpeg-utils.html,ffmpeg-utils},
@url{ffmpeg-scaler.html,ffmpeg-scaler},
@url{ffmpeg-resampler.html,ffmpeg-resampler},
@url{ffmpeg-codecs.html,ffmpeg-codecs},
@url{ffmpeg-bitstream-filters.html,ffmpeg-bitstream-filters},
@url{ffmpeg-formats.html,ffmpeg-formats},
@url{ffmpeg-devices.html,ffmpeg-devices},
@url{ffmpeg-protocols.html,ffmpeg-protocols},
@url{ffmpeg-filters.html,ffmpeg-filters}
@end ifhtml

@ifnothtml
@ifset config-all
ffmpeg(1),
@end ifset
@ifset config-not-all
ffmpeg-all(1),
@end ifset
ffplay(1), ffprobe(1), ffserver(1),
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
@end ifnothtml

@include authors.texi

@ignore

@setfilename ffmpeg
@settitle ffmpeg video converter

@end ignore

@bye