Without quite realizing it, this series has reached its ninth article, and there still feels like so much left to write; the road is long and far from over, and I will keep searching high and low. Benefits first: as I mentioned in About Me, I hold giveaways from time to time, with VIP memberships, books, and so on. Don't ask where they come from;
the most important thing is to thank everyone for the support. If anyone wants to submit an article, you can also leave me a message in the backend. All rights remain with the original authors, and contributors get priority for the various benefits. Sharing is not easy. Well, enough chatter: today I am giving away iQIYI VIP monthly membership cards. The method is very simple: leave a comment below the article, and the three comments with the most likes win (deadline: 12:00 tonight); the VIP membership cards will be sent out promptly after 12. This is still an easy bar to clear. Let me give my own example: I changed majors in college, so in my sophomore year I had to make up freshman courses while taking the sophomore ones; with just a little effort I earned a second-class scholarship (although it came easily, I still failed one subject). I can hardly believe it myself. There is a joke that goes: "when I go crazy, I even beat myself up," haha. Not to that extent yet.
All of the above was off-topic. Now let's start today's subject: the ninth article on the multimedia framework.
The previous article mainly introduced the Stagefright framework and AwesomePlayer's data parser, and closed by saying that the parse and decode stages would be covered in this article. Here is today's agenda:
- Two pictures to see the data flow
- The prepare process in AwesomePlayer
- The initialization process of audio and video decoders in AwesomePlayer
- The Decode process of Stagefright
- The data processing process of Stagefright
- The flow from source to final decoded data
Two pictures to see the data flow
Everything comes from MediaSource, and everything goes back to MediaSource.

Audio: (diagram: audio data flow)

Video: (diagram: video data flow)
The prepare process in AwesomePlayer
First, let’s take a look at the execution process of AwesomePlayer’s prepare:
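A simplified sketch of this flow, abridged from the AOSP AwesomePlayer sources (details such as flag handling vary between Android versions):

```cpp
status_t AwesomePlayer::prepare() {
    Mutex::Autolock autoLock(mLock);
    return prepare_l();
}

status_t AwesomePlayer::prepare_l() {
    if (mFlags & PREPARED)  return OK;
    if (mFlags & PREPARING) return UNKNOWN_ERROR;

    mIsAsyncPrepare = false;
    status_t err = prepareAsync_l();   // posts the AwesomeEvent
    if (err != OK) return err;

    // Synchronous prepare blocks until onPrepareAsyncEvent finishes.
    while (mFlags & PREPARING) {
        mPreparedCondition.wait(mLock);
    }
    return mPrepareResult;
}

status_t AwesomePlayer::prepareAsync_l() {
    if (mFlags & PREPARING) return UNKNOWN_ERROR;

    if (!mQueueStarted) {
        mQueue.start();                // start the event-scheduler thread
        mQueueStarted = true;
    }
    modifyFlags(PREPARING, SET);
    // AwesomePlayer::onPrepareAsyncEvent is handed to the AwesomeEvent
    // constructor as the callback to run when the event fires.
    mAsyncPrepareEvent = new AwesomeEvent(
            this, &AwesomePlayer::onPrepareAsyncEvent);
    mQueue.postEvent(mAsyncPrepareEvent);
    return OK;
}
```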
The above code is summarized as follows: prepare() calls prepareAsync_l(), which creates a new AwesomeEvent and passes AwesomePlayer::onPrepareAsyncEvent to the AwesomeEvent constructor as the callback to run. It also starts mQueue, which serves as the event handler.
When the AwesomeEvent created above fires, it executes the onPrepareAsyncEvent function. Let's see what this function does:
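A trimmed-down sketch, again following the shape of the AOSP source (error handling abridged):

```cpp
void AwesomePlayer::onPrepareAsyncEvent() {
    Mutex::Autolock autoLock(mLock);

    if (mFlags & PREPARE_CANCELLED) {
        abortPrepare(UNKNOWN_ERROR);
        return;
    }

    // Audio and video are handled separately: each demuxed track
    // gets its own decoder.
    if (mVideoTrack != NULL && mVideoSource == NULL) {
        status_t err = initVideoDecoder();
        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    if (mAudioTrack != NULL && mAudioSource == NULL) {
        status_t err = initAudioDecoder();
        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    // Streaming sources keep buffering; local sources finish here.
    if (isStreamingHTTP()) {
        postBufferingEvent_l();
    } else {
        finishAsyncPrepare_l();
    }
}
```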
The above code is summarized as follows: it processes audio and video (AV) separately, calling the AwesomePlayer::initVideoDecoder and AwesomePlayer::initAudioDecoder() functions respectively. This article is from the fish swimming upstream: http://blog.csdn.net/hejjunlin/article/details/52532085
The initialization process of audio and video decoders
First, let’s take a look at initVideoDecoder, which initializes the video decoder:
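An abridged version, with signature and error paths simplified from the AOSP source:

```cpp
status_t AwesomePlayer::initVideoDecoder(uint32_t flags) {
    // Wrap the demuxed video track in an OMXCodec. The returned
    // MediaSource yields fully decoded frames from read().
    mVideoSource = OMXCodec::Create(
            mClient.interface(),        // IOMX handle into the OMX service
            mVideoTrack->getFormat(),   // MetaData: codec type, width, height...
            false,                      // createEncoder == false, so a decoder
            mVideoTrack,                // demuxed video track as input
            NULL,                       // no specific component requested
            flags);

    if (mVideoSource == NULL) {
        return UNKNOWN_ERROR;
    }

    int64_t durationUs;
    if (mVideoTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
        Mutex::Autolock autoLock(mMiscStateLock);
        if (mDurationUs < 0 || durationUs > mDurationUs) {
            mDurationUs = durationUs;   // keep the longest track duration
        }
    }

    // start() moves the codec toward the executing state and begins
    // filling its input buffers from mVideoTrack.
    status_t err = mVideoSource->start();
    if (err != OK) {
        mVideoSource.clear();
        return err;
    }
    return OK;
}
```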
Next, let's take a look at the initialization of the audio decoder, along with the declarations of the member variables involved (mAudioTrack, mOmxSource and mAudioSource). The code is as follows:
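A sketch of initAudioDecoder, abridged from the AOSP source; mAudioTrack is the demuxed track, mOmxSource the OMXCodec wrapper, and mAudioSource what the player actually reads from:

```cpp
status_t AwesomePlayer::initAudioDecoder() {
    sp<MetaData> meta = mAudioTrack->getFormat();

    const char *mime;
    CHECK(meta->findCString(kKeyMIMEType, &mime));

    if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) {
        // Already PCM: no decoder needed, use the track directly.
        mAudioSource = mAudioTrack;
    } else {
        // Otherwise wrap the track in an OMXCodec decoder.
        mOmxSource = OMXCodec::Create(
                mClient.interface(), meta,
                false /* createEncoder */, mAudioTrack,
                NULL /* matchComponentName */, 0 /* flags */);
        mAudioSource = mOmxSource;
    }

    if (mAudioSource == NULL) {
        return UNKNOWN_ERROR;
    }

    // start() allocates buffers and begins pulling encoded data
    // from mAudioTrack.
    status_t err = mAudioSource->start();
    if (err != OK) {
        mAudioSource.clear();
        mOmxSource.clear();
        return err;
    }
    return OK;
}
```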
Summarizing the above code: after Stagefright calls AwesomePlayer's prepare, AwesomePlayer runs its own prepareAsync path to initialize the audio and video decoders, and both initialization methods call OMXCodec::Create. Let's take a look at that process.
The Decode process of Stagefright
After the "data stream encapsulation" stage, the two MediaSource objects we obtained are actually two OMXCodec instances. AwesomePlayer and mAudioPlayer both pull data from a MediaSource for playback: AwesomePlayer reads the raw video frames that need to be rendered, while mAudioPlayer reads the raw audio data that needs to be played. In other words, the data read from an OMXCodec is already decoded. How does OMXCodec turn the data source into raw data through parsing and decoding? Starting from the factory method OMXCodec::Create, let's look at its code:
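The factory method has roughly the following shape (a sketch based on the AOSP OMXCodec; exact signatures vary between Android versions):

```cpp
// static
sp<MediaSource> OMXCodec::Create(
        const sp<IOMX> &omx,
        const sp<MetaData> &meta,
        bool createEncoder,
        const sp<MediaSource> &source,
        const char *matchComponentName,
        uint32_t flags) {
    const char *mime;
    CHECK(meta->findCString(kKeyMIMEType, &mime));

    // Collect the codec components able to handle this MIME type,
    // honoring matchComponentName when one is requested.
    Vector<String8> matchingCodecs;
    findMatchingCodecs(
            mime, createEncoder, matchComponentName, flags, &matchingCodecs);
    if (matchingCodecs.isEmpty()) {
        return NULL;
    }

    // ...then allocate a node for the chosen component and wrap it,
    // as shown in the snippet after the parameter list below.
}
```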
The above code is summarized by its parameters, as follows:

- IOMX &omx refers to an instance of an OMXNodeInstance object.
- MetaData &meta is obtained from MediaSource.getFormat(). This object mainly contains a KeyedVector<uint32_t, typed_data> mItems, which stores key/value pairs describing the MediaSource's format information.
- bool createEncoder indicates whether this OMXCodec is an encoder or a decoder.
- MediaSource &source is a MediaExtractor (the data parser).
- char *matchComponentName specifies the codec component used to build this OMXCodec. First, findMatchingCodecs looks up the matching codecs; then a node is allocated on the current IOMX and the event listener is registered via omx->allocateNode(componentName, observer, &node); finally, the IOMX node is wrapped into an OMXCodec:
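That last step looks roughly like this, continuing the sketch above (getComponentQuirks and configureCodec take slightly different arguments across versions):

```cpp
sp<OMXCodecObserver> observer = new OMXCodecObserver;
for (size_t i = 0; i < matchingCodecs.size(); ++i) {
    const char *componentName = matchingCodecs[i].string();

    // Allocate an OMX node for this component and register the
    // observer so that OMX events reach the codec.
    IOMX::node_id node = 0;
    if (omx->allocateNode(componentName, observer, &node) != OK) {
        continue;               // fall back to the next candidate
    }

    // Wrap the live node in an OMXCodec. Since OMXCodec is itself a
    // MediaSource, callers simply read() decoded data from it.
    sp<OMXCodec> codec = new OMXCodec(
            omx, node, getComponentQuirks(componentName, createEncoder),
            createEncoder, mime, componentName, source);
    observer->setCodec(codec);
    codec->configureCodec(meta);
    return codec;
}
return NULL;
```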
Thus, we obtain the OMXCodec.
After AwesomePlayer obtains this OMXCodec, let's look at initVideoDecoder/initAudioDecoder. Take the initAudioDecoder method: it assigns mAudioSource = mOmxSource and then calls mAudioSource->start() for initialization. The main tasks of OMXCodec initialization are:
- Send the start command to OpenMAX: mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle) (see the sketch after this list).
- Call allocateBuffers() to allocate two groups of buffers, stored in Vector<BufferInfo> mPortBuffers[2], one for input and one for output.
- The initXXXDecoder method then calls mAudioSource->start() / mVideoSource->start(), which triggers the start() method of the MediaSource subclasses for video and audio; internally they begin fetching data from the data source and stop once the buffers are full. AwesomePlayer can then call the MediaSource's read() method to obtain decoded data.
- For mVideoSource, the data read via mVideoSource->read(&mVideoBuffer, &options) is sent to the display module for rendering by mVideoRenderer->render(mVideoBuffer).
- mAudioSource is wrapped by mAudioPlayer, which is then responsible for reading data and for playback control.
- AwesomePlayer calls OMXCodec to read ES (elementary stream) data and perform the decoding.
- OMXCodec calls the MediaSource's read() function to obtain audio and video data.
- OMXCodec calls Android's IOMX interface, which is in effect the OMX decode implementation inside Stagefright.
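For reference, the initialization that start() triggers inside OMXCodec has roughly this shape (a sketch based on the AOSP OMXCodec::init(); quirk handling and error paths omitted):

```cpp
status_t OMXCodec::init() {
    Mutex::Autolock autoLock(mLock);

    // Ask the component to move from Loaded to Idle...
    status_t err = mOMX->sendCommand(
            mNode, OMX_CommandStateSet, OMX_StateIdle);
    CHECK_EQ(err, (status_t)OK);

    // ...and allocate the buffers it will transition with:
    // mPortBuffers[kPortIndexInput] and mPortBuffers[kPortIndexOutput].
    err = allocateBuffers();
    if (err != OK) {
        return err;
    }

    // Block until the component has reached the Executing state.
    while (mState != EXECUTING && mState != ERROR) {
        mAsyncCompletion.wait(mLock);
    }
    return mState == ERROR ? UNKNOWN_ERROR : OK;
}
```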
This whole process is the prepare stage; its key point is that decoding fills the buffers with stream data. Next, when the Java layer calls the start method, the call passes through MediaPlayerService to StagefrightPlayer, which holds a reference to AwesomePlayer; thus AwesomePlayer's play method is invoked. See the code:
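A condensed sketch of play()/play_l(), following the AOSP source (constructor arguments and flag handling vary by version):

```cpp
status_t AwesomePlayer::play() {
    Mutex::Autolock autoLock(mLock);
    return play_l();
}

status_t AwesomePlayer::play_l() {
    if (mFlags & PLAYING) {
        return OK;                       // already playing
    }
    modifyFlags(PLAYING, SET);

    if (mAudioSource != NULL && mAudioPlayer == NULL) {
        // Audio is wrapped in an AudioPlayer, which pulls decoded PCM
        // from mAudioSource through its fill callback.
        mAudioPlayer = new AudioPlayer(mAudioSink, this);
        mAudioPlayer->setSource(mAudioSource);

        // The audio clock becomes the time source for A/V sync.
        mTimeSource = mAudioPlayer;

        status_t err = mAudioPlayer->start(true /* sourceAlreadyStarted */);
        if (err != OK) {
            return err;
        }
    }

    if (mVideoSource != NULL) {
        // Kick off the video loop: each onVideoEvent reads one frame,
        // syncs it against the audio clock, renders, and re-posts itself.
        postVideoEvent_l();
    }
    return OK;
}
```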
After play is called, AwesomePlayer reads data through mVideoSource->read(&mVideoBuffer, &options), which is ultimately served by OMXCodec::read. OMXCodec::read implements data reading in two steps:

(1) Call drainInputBuffers() to fill mPortBuffers[kPortIndexInput]; this step completes the parse: OpenMAX reads demuxed data from the data source into the input buffers as OpenMAX's input.

(2) Call fillOutputBuffers() to fill mPortBuffers[kPortIndexOutput]; this step completes the decode: OpenMAX decodes the data in the input buffers and writes the decoded video data to the output buffers. AwesomePlayer then renders the parsed and decoded data via mVideoRenderer->render(mVideoBuffer). mVideoRenderer is actually a wrapper around an IOMXRenderer, called AwesomeRemoteRenderer:
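In the AOSP source, the wrapper looks essentially like this:

```cpp
struct AwesomeRemoteRenderer : public AwesomeRenderer {
    AwesomeRemoteRenderer(const sp<IOMXRenderer> &target)
        : mTarget(target) {
    }

    virtual void render(MediaBuffer *buffer) {
        // The decoded frame lives in a buffer owned by OMX; pass its
        // buffer id to the remote renderer for display.
        void *id;
        if (buffer->meta_data()->findPointer(kKeyBufferID, &id)) {
            mTarget->render((IOMX::buffer_id)id);
        }
    }

private:
    sp<IOMXRenderer> mTarget;

    AwesomeRemoteRenderer(const AwesomeRemoteRenderer &);
    AwesomeRemoteRenderer &operator=(const AwesomeRemoteRenderer &);
};
```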
The data processing process of Stagefright
AudioPlayer is a member of AwesomePlayer. AudioPlayer drives its data acquisition through a callback, while AwesomePlayer's video side is driven by the video event; what they have in common is that data acquisition is abstracted as mSource->read(), and read() internally binds parse and decode together. As for Stagefright's A/V synchronization: the audio side is entirely callback-driven, and note that the video side obtains the audio timestamp in onVideoEvent; this is the traditional timestamp-based approach to A/V sync. A sketch of the audio callback follows.
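This is roughly how the callback is wired in the AOSP AudioPlayer (newer versions add extra parameters; fillBuffer ends up calling mSource->read()):

```cpp
// static: registered with the AudioSink when playback starts; the audio
// output thread invokes it whenever the hardware buffer needs more PCM.
size_t AudioPlayer::AudioSinkCallback(
        MediaPlayerBase::AudioSink *audioSink,
        void *buffer, size_t size, void *cookie) {
    AudioPlayer *me = (AudioPlayer *)cookie;

    // fillBuffer() internally calls mSource->read(&mInputBuffer, ...)
    // and copies decoded PCM into the sink's buffer.
    return me->fillBuffer(buffer, size);
}
```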
AwesomePlayer's video side mainly has the following members:

- mVideoSource (decodes the video)
- mVideoTrack (reads video data from the multimedia file)
- mVideoRenderer (converts decoded video into the display format; Android uses RGB565)
- mISurface (the redraw layer)
- mQueue (the event queue)
The runtime flow of Stagefright (taking video as the example) is as follows:

1. First, set the mUri path.
2. Start mQueue, which creates a thread to run threadEntry (the thread is named TimedEventQueue; this thread is the event scheduler).
3. Open the header of the file specified by mUri and select the demultiplexer according to its type (e.g. MPEG4Extractor).
4. Use MPEG4Extractor to separate the MP4 container's audio and video tracks, and return the MPEG4Source video track as mVideoTrack.
5. Select the decoder according to the encoding type in mVideoTrack; the avc encoding type selects AVCDecoder, which is returned as mVideoSource, and mVideoSource's mSource is set to mVideoTrack.
6. Insert onVideoEvent into the Queue to start decoding and playback.
7. Read a parsed video buffer through the mVideoSource object. If the parsed buffer has not yet reached its A/V-sync timestamp, postpone it to the next iteration.
8. If mVideoRenderer is empty, initialize it (when not using OMX, mVideoRenderer is set to AwesomeLocalRenderer).
9. Use the mVideoRenderer object to convert the parsed video buffer into RGB565 and send it to the display module for drawing.
10. Reinsert onVideoEvent into the event scheduler to loop.
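The loop at the heart of steps 6 through 10 is onVideoEvent; a condensed sketch, following the AOSP source (error handling and seek logic omitted):

```cpp
void AwesomePlayer::onVideoEvent() {
    Mutex::Autolock autoLock(mLock);

    if (mVideoBuffer == NULL) {
        MediaSource::ReadOptions options;
        // read() returns a fully decoded frame from the OMXCodec.
        status_t err = mVideoSource->read(&mVideoBuffer, &options);
        if (err != OK) {
            return;     // end of stream or decode error (simplified)
        }
    }

    int64_t timeUs;
    CHECK(mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs));

    // A/V sync: compare the frame timestamp against the audio clock.
    int64_t nowUs = mTimeSource->getRealTimeUs() - mTimeSourceDeltaUs;
    int64_t latenessUs = nowUs - timeUs;
    if (latenessUs < -10000) {
        // Frame is more than 10 ms early: repost and try again later.
        postVideoEvent_l(10000);
        return;
    }

    if (mVideoRenderer == NULL) {
        initRenderer_l();   // e.g. AwesomeLocalRenderer when not using OMX
    }
    mVideoRenderer->render(mVideoBuffer);

    mVideoBuffer->release();
    mVideoBuffer = NULL;

    postVideoEvent_l();     // loop: schedule the next frame
}
```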
This article is from the fish swimming upstream: http://blog.csdn.net/hejjunlin/article/details/52532085
The flow from source to final decoded data
You can refer to the diagram in "Android Multimedia Framework Summary (Eight): Stagefright Framework, AwesomePlayer and the Data Parser"; I won't repost it here.
- Set the DataSource. The data source can be of two types, URI or FD. A URI can be http://, rtsp://, and so on; an FD is a local file descriptor through which the corresponding file can be located.
- Generate a MediaExtractor from the DataSource. This is done by sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource); MediaExtractor::Create creates a different data-reading object depending on the data content.
- Through setVideoSource, the MediaExtractor splits out the audio data stream (mAudioTrack) and the video data stream (mVideoTrack).
- In onPrepareAsyncEvent(), if the DataSource is a URL, data is fetched from that address and buffering begins until mVideoTrack and mAudioTrack are obtained. mVideoTrack and mAudioTrack are then turned into the two decoders mVideoSource and mAudioSource by calling initVideoDecoder() and initAudioDecoder(), after which postBufferingEvent_l() submits the event that starts buffering.
- The buffering handler is onBufferingUpdate(). Once the buffer holds enough data to play, play_l() is called to start playback. The key point in play_l() is the call to postVideoEvent_l() to submit mVideoEvent; when this event executes, it calls onVideoEvent(), which decodes video via mVideoSource->read(&mVideoBuffer, &options). Audio decoding is handled by mAudioPlayer.
- The video decoder reads frame-by-frame decoded data through mVideoSource->read into mVideoBuffer, and finally hands the video data to the display module through mVideoRenderer->render(mVideoBuffer). When you need to pause or stop, call cancelPlayerEvents to submit events that stop the decoding; you can also choose whether to continue buffering data.
To get blog update reminders and more solid Android content and source-code analysis, welcome to follow my WeChat public account: scan the QR code below, or press and hold to identify it.