About 2x faster on my 4-core ARM server, without any significant parallelism overhead:
$ time ffmpeg_threading/ffmpeg -i input.mp4 -ar 1000 -vn -acodec flac -f flac -y /dev/null -hide_banner -loglevel quiet
14.90s user 2.08s system 218% cpu 7.771 total
$ time ffmpeg -i input.mp4 -ar 1000 -vn -acodec flac -f flac -y /dev/null -hide_banner -loglevel quiet
14.05s user 1.80s system 114% cpu 13.841 total
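(For reference, a quick check of the speedup implied by those wall-clock numbers:)

```python
# Wall-clock totals from the two runs above, in seconds.
threaded = 7.771
baseline = 13.841

print(f"{baseline / threaded:.2f}x faster")  # -> 1.78x faster
```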
You're not using hardware acceleration on the decoding side, and you're discarding the video output here. I wonder what happens if we use hardware acceleration for both video decoding and encoding, i.e. something like this on an NVIDIA card:
ffmpeg -hwaccel cuda -i $inputFile -codec:a copy -codec:v hevc_nvenc $output
One thing to note about hardware-accelerated transcoding with NVENC is that it has a resolution limit, so if you're trying to transcode something like 8K VR video with it, it'll choke.
But what part gets multithreading? The video compression is already multithreaded; video decompression I'm not sure about. And I think anything else is fairly small in comparison in terms of performance cost. All improvements are welcome, but I would expect the impact to be fairly immaterial in practice.
Well, that's the very specific command I'm using in one of my webapps (https://datethis.app), and it's one of the main performance hotspots, so it's very *not* immaterial.
This is removing the video stream (-vn) so that's not involved. Not sure which parts are in parallel here, but I'm guessing decoding and encoding the audio.
Threading depends on the implementation of each encoder/decoder - most video encoders and decoders are multithreaded, audio ones not so much. At least that was the state of the world the last time I looked into ffmpeg's internals.
Video compression (at least x264/x265) has a maximum number of threads it can use depending on the video resolution. This means that e.g. for 1080p ffmpeg cannot fully utilize a 64-thread CPU.
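As a back-of-the-envelope sketch of why that happens (the constants below are illustrative assumptions, not x264's actual formula): with frame-parallel encoding each thread needs a minimum slab of macroblock rows to work on, so frame height caps the useful thread count.

```python
def max_useful_threads(height: int, rows_per_thread: int = 4, mb_size: int = 16) -> int:
    """Illustrative cap on encoder threads: each thread is assumed to need
    rows_per_thread macroblock rows of work (an assumed constant, not the
    real x264 heuristic)."""
    mb_rows = (height + mb_size - 1) // mb_size
    return max(1, mb_rows // rows_per_thread)

print(max_useful_threads(1080))  # 1080p -> 17 with these assumptions
print(max_useful_threads(2160))  # 4K    -> 33
```

With numbers in that ballpark, a 64-thread CPU sits partly idle at 1080p, which matches what people see in practice.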
Nice, great presentation! Curious what he has in mind for the "dynamic pipelines" and "scripting (Lua?)" he mentions in the "Future directions" section. I'm imagining something more powerful for animating properties?
Man, i really want to watch this presentation, but the piss poor audio just causes my brain to have a fit. How in today's time is this still possible to screw up so badly?
Agreed. Maybe some are not as sensitive to this, but it is a major energy suck for me. A little post-processing on noise and compression would go a long way. This recording is as raw as the Ramsey meme.
What are you talking about... the audio might not be professional studio level 10/10, but I don't see anything significantly wrong with it - given that it's more like a standard presentation mic. It's clearly good enough.
His work is not primarily about multithreading but about cleaning up ffmpeg to be true to its own architecture so that normal human beings have a chance of being able to maintain it. Things like making data flow one way in a pipeline, separating public and private state and having clearly defined interfaces.
Things had got so bad that every change was super difficult to make.
Multithreading comes out as a natural benefit of the cleanup.
> "multi-threading it's not really all about just multi-threading - I will say more about that later - but that is the marketable term"
That's what's said in the video, at least in the first 10 seconds, so it might be that multi-threading is just too trivial a term for the work here. (But I haven't watched the full video yet, so this is just an observation.)
There are many different types of pipelined processing tasks with many differing kinds of threading approaches, and I guess the video clears up what kinds of approaches work best with transcoding ..
Is there any video editing software that takes advantage of ffmpeg? I once thought about making something to draw geometry through SVG and then feed it to ffmpeg, maybe with some UI, or just to add text, but I never started.
Avidemux feels like it's a bit that.
Since ffmpeg internals are quite raw and not written to be accessed through a GUI, any video editor based on it would probably be quite clunky and weird and hard to maintain.
Maybe an editor that uses modules that just build some kind of preview, with a command explainer or some pipeline viewer.
ffmpeg is quite powerful, but it's a bit stuck because it only works from the command line, which is fine, but I guess it somehow prevents it from being used by some people.
I've already written a python script to take a random number of clips and build a mosaic with the xstack filter. It was not easy.
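If anyone wants to try the same thing, the fiddly part was generating the xstack layout string. Here's a minimal sketch for a uniform grid (it assumes all inputs have the same dimensions, per the layout syntax in the ffmpeg xstack docs):

```python
def xstack_layout(cols: int, rows: int) -> str:
    """Build an ffmpeg xstack layout for a cols x rows grid of same-sized
    inputs, using the filter's w<i>/h<i> placement expressions."""
    cells = []
    for r in range(rows):
        for c in range(cols):
            # x offset: sum of widths of the inputs to the left.
            x = "0" if c == 0 else "+".join(f"w{i}" for i in range(c))
            # y offset: sum of heights of the inputs above (first of each row).
            y = "0" if r == 0 else "+".join(f"h{i * cols}" for i in range(r))
            cells.append(f"{x}_{y}")
    return f"xstack=inputs={cols * rows}:layout=" + "|".join(cells)

print(xstack_layout(2, 2))
# -> xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0
```

You'd pass the result to ffmpeg via -filter_complex with one input per clip.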
VapourSynth is intended to be the middle ground you might be seeking. You manipulate the video in Python instead of ffmpeg's CLI, but it's often more extensible and powerful than pure ffmpeg due to the extensions:
I’ve been looking for more ways to speed up the transcoding process. One solution I found was GPU acceleration; another was using more threads, but it's hard to find the optimal number to provide.
Can't you just use Hyperparameter Optimization to find the best value? Tools like Sherpa or Scikit-optimize can be used to explore a search space of n-threads/types of input/CPU type (which might be fixed on your machine).
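Since it's a one-dimensional search, even a brute-force sweep usually does the job before reaching for an HPO framework. A sketch, where run_transcode is a hypothetical callable you'd replace with a timed ffmpeg invocation via subprocess:

```python
import time

def time_run(run_transcode, threads: int) -> float:
    """Wall-clock one transcode at the given thread count."""
    start = time.perf_counter()
    run_transcode(threads)
    return time.perf_counter() - start

def best_thread_count(run_transcode, candidates) -> int:
    """Pick the candidate thread count with the lowest wall-clock time.
    For real measurements, repeat each run and take the median."""
    return min(candidates, key=lambda n: time_run(run_transcode, n))
```

Caching effects and thermal throttling make single runs noisy, hence the note about repeating measurements.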
I don't think "just" is appropriate here, that makes it sound like this should be a trivial task for anyone while it is not. Using "just" like this minimizes work and makes people feel stupid which leads to various negative outcomes.
For most workloads, setting the number of threads to the number of vCPUs (i.e. count each hyperthreaded core as 2) works. But GPU acceleration is much better if it's available to you.
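In script form, that rule of thumb is just (filenames here are placeholders):

```python
import os

# os.cpu_count() returns logical CPUs, so hyperthreaded cores count twice.
threads = os.cpu_count() or 1

cmd = ["ffmpeg", "-i", "input.mp4", "-threads", str(threads), "output.mp4"]
print(cmd)
```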
Though in my tests I found that GPU acceleration of video decoding actually hurts performance. It seems software decoding is faster than hardware for some codecs. That's of course not the case for encoding.
If they have started to solve that problem then I will be a much happier camper
https://ffmpeg.org/pipermail/ffmpeg-devel/2023-November/3165...
instead of the tweet (or the xit, or whatever they are called now), as the substance in the tweet is the link.
Just "post". They're no longer limited to 280 characters now either, although longer posts are collapsed by default.
... which in turn references the code at https://git.khirnov.net/libav.git/log/?h=ffmpeg_threading
https://vsdb.top/
https://www.vapoursynth.com/
I have seen some niche software built on ffmpeg like losslesscut:
https://github.com/mifi/lossless-cut
Staxrip is also big:
https://github.com/staxrip/staxrip
But I don't know anything "comprehensive."
Hopefully this will make my small transcoding needs faster for Plex (as I don't have hardware transcoding support on my graphics card) =D
Sorry for lecturing