Android Framework Study Notes -- The Audio Playback Flow

Flow Diagram

This analysis is based on Android 5.1. Earlier versions seem to differ somewhat, and 6.0 did not change much, but the overall idea is the same.

Structure Diagram

Playback works like a drainage system: AudioPolicyService is the valve, AudioFlinger is the drainage pool, PlaybackThread is the motor, Track is the water source, AudioOutput is the drain hole, and AudioTrack is the bucket.

To drain water you first bore a hole (openOutput), then install a motor (create a PlaybackThread), then hook the source up to the bucket (create a Track), pick a drain hole (selectOutput), start the corresponding motor (the PlaybackThread wakes from sleep), and then everything drains away on its own...

Overall Flow Diagram

Startup and Preparation of the AudioTrack Server Side

The server side here means the audio-related services such as AudioFlinger and AudioPolicyService, which start when the system boots.
Once the system is up, clients (usually apps) use these services to reach the functionality the system provides.
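On the native side, a client reaches one of these services through ServiceManager. A minimal sketch (roughly what AudioSystem::get_audio_flinger() does internally, minus its caching and binder-death handling):

#include <binder/IServiceManager.h>
#include <media/IAudioFlinger.h>

using namespace android;

// Sketch: look up AudioFlinger by its registered name, then cast the raw
// IBinder into the typed client-side proxy.
sp<IAudioFlinger> getAudioFlingerSketch() {
    sp<IBinder> binder =
            defaultServiceManager()->getService(String16("media.audio_flinger"));
    if (binder == NULL) return NULL;
    return interface_cast<IAudioFlinger>(binder);
}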

Startup of the Audio-Related Services

At boot the system starts all sorts of services; AudioFlinger, AudioPolicyService and a few other audio-related services are started here.
Each service's instantiate() function is defined and implemented in BinderService.h, mainly to abstract away the service-registration step.
BinderService is a template class; a service that inherits from it can register itself directly with ServiceManager.

//--->frameworks/av/media/mediaserver/main_mediaserver.cpp
int main(int argc __unused, char** argv)
{
    ...
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    AudioPolicyService::instantiate();
    ...
}

//--->frameworks/native/include/binder/BinderService.h
template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        // The template instantiates the concrete service object here
        // new SERVICE() invokes the constructor of the service (AudioFlinger, AudioPolicyService, etc.)
        return sm->addService(
                String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }
    ...
    static void instantiate() { publish(); }
    ...

};
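For reference, a minimal sketch of how a service plugs into this template; MyService and its name are invented for illustration (the real AudioFlinger, for instance, registers under "media.audio_flinger"):

#include <binder/Binder.h>
#include <binder/BinderService.h>

namespace android {

// Sketch only: a real service derives from BinderService<T> plus its
// generated Bn* interface class (e.g. BnAudioFlinger); BBinder stands in here.
class MyService : public BinderService<MyService>, public BBinder {
public:
    // publish() uses this name in the ServiceManager::addService() call.
    static const char* getServiceName() { return "media.my_service"; }
};

}  // namespace android

// In main(): android::MyService::instantiate();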

The Creation of AudioFlinger

AudioFlinger does the mixing work. (In short: it is very important.)
Its constructor mainly initializes member variables and debugging facilities.
onFirstRef() is usually where further initialization goes, but for now AudioFlinger does nothing significant there.

//--->frameworks/av/services/audioflinger/AudioFlinger.cpp
AudioFlinger::AudioFlinger()
    : BnAudioFlinger(),
      mPrimaryHardwareDev(NULL),
      mAudioHwDevs(NULL),
      mHardwareStatus(AUDIO_HW_IDLE),
      mMasterVolume(1.0f),
      mMasterMute(false),
      mNextUniqueId(1),
      mMode(AUDIO_MODE_INVALID),
      mBtNrecIsOff(false),
      mIsLowRamDevice(true),
      mIsDeviceTypeKnown(false),
      mGlobalEffectEnableTime(0),
      mPrimaryOutputSampleRate(0)
{
...
#ifdef TEE_SINK
    ....
#endif
...
}

void AudioFlinger::onFirstRef()
{
    Mutex::Autolock _l(mLock);
    ...
    mPatchPanel = new PatchPanel(this);
    mMode = AUDIO_MODE_NORMAL;
}

The Creation of AudioPolicyService

AudioPolicyService controls the audio playback policy (for example, which device the sound of an incoming call should play on while headphones are plugged in) and manages the audio devices.

AudioPolicyService's constructor is even simpler, initializing only the main members.
The heavier work happens in onFirstRef(): creating the command threads and initializing the key member mAudioPolicyManager.

//--->frameworks/av/services/audiopolicy/AudioPolicyService.cpp
AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService(),
    mpAudioPolicyDev(NULL),
    mpAudioPolicy(NULL),
    mAudioPolicyManager(NULL),
    mAudioPolicyClient(NULL),
    mPhoneState(AUDIO_MODE_INVALID)
{}

void AudioPolicyService::onFirstRef()
{
    ...
    {
        Mutex::Autolock _l(mLock);
        mTonePlaybackThread = new AudioCommandThread(String8("ApmTone"), this);
        mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
        mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);

#ifdef USE_LEGACY_AUDIO_POLICY
    ...(the purpose of this macro is unclear for now)
#else
        mAudioPolicyClient = new AudioPolicyClient(this);
        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
#endif
    }
    ...(effects-related)
}

The Creation of AudioPolicyManager

AudioPolicyManager implements the audio routing policy; AudioPolicyService forwards essentially all routing-related calls straight to it.
(It appears AudioPolicyManager can be overridden to change the policy implementation; from 6.0 onward a different AudioPolicyManager implementation can even be selected dynamically.)

In the constructor, every usable output and recording device is opened, and through AudioPolicyService the mixing threads for those devices are created.

//--->frameworks/av/services/audiopolicy/AudioPolicyManager.cpp
AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
    :mPrimaryOutput((audio_io_handle_t)0),
    ...

{
    mpClientInterface = clientInterface;
    ...
    // load the audio modules
    defaultAudioPolicyConfig();
    ...
    for (size_t i = 0; i < mHwModules.size(); i++) {
        ...
        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++) {
            ...
            status_t status = mpClientInterface->openOutput(outProfile->mModule->mHandle,  // open the audio device
                &output,
                &config,
                &outputDesc->mDevice,
                String8(""),
                &outputDesc->mLatency,
                outputDesc->mFlags);
        }
    }

    ...(open the recording devices)
}

mpClientInterface is in effect AudioPolicyService, and AudioPolicyService in turn ends up calling AudioFlinger's openOutput().

//--->frameworks/av/services/audiopolicy/AudioPolicyClientImpl.cpp
status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
    audio_io_handle_t *output,
    audio_config_t *config,
    audio_devices_t *devices,
    const String8& address,
    uint32_t *latencyMs,
    audio_output_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return PERMISSION_DENIED;
    }
    return af->openOutput(module, output, config, devices, address, latencyMs, flags);
}

AudioFlinger's openOutput() creates a PlaybackThread matched to the type of the output device.
PlaybackThread is central to AudioFlinger; it is the engine of the audio system.

There are several kinds of PlaybackThread. MixerThread is the most common; DuplicatingThread is used when the same audio must play on two outputs at once, for example a ring-type stream that has to come out of both a Bluetooth headset and the speaker.

//--->frameworks/av/services/audioflinger/AudioFlinger.cpp
status_t AudioFlinger::openOutput(audio_module_handle_t module,
    audio_io_handle_t *output,
    audio_config_t *config,
    audio_devices_t *devices,
    const String8& address,
    uint32_t *latencyMs,
    audio_output_flags_t flags){
    ...
    sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);
    ...
}

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
    audio_io_handle_t *output,
    audio_config_t *config,
    audio_devices_t devices,
    const String8& address,
    audio_output_flags_t flags)
{
    ...
    status_t status = hwDevHal->open_output_stream(
        hwDevHal,
        *output,
        devices,
        flags,
        config,
        &outStream,
        address.string());
    ...
    // Create the appropriate thread based on the flags
    AudioStreamOut *outputStream = new AudioStreamOut(outHwDev, outStream, flags);
    PlaybackThread *thread;
    if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
        thread = new OffloadThread(this, outputStream, *output, devices);  
    } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
            || !isValidPcmSinkFormat(config->format)
            || !isValidPcmSinkChannelMask(config->channel_mask)) {
        thread = new DirectOutputThread(this, outputStream, *output, devices);
    } else {
        thread = new MixerThread(this, outputStream, *output, devices);
    }
    ...
}

Once created, the PlaybackThread starts itself in onFirstRef() and waits in threadLoop() for an AudioTrack to connect.

//--->frameworks/av/services/audioflinger/Threads.cpp
AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
        audio_io_handle_t id, audio_devices_t device, type_t type)
    :   PlaybackThread(audioFlinger, output, id, device, type),
        mFastMixerFutex(0)
{...(mainly initializes the mixer)}

AudioFlinger::PlaybackThread::PlaybackThread(const sp<AudioFlinger>& audioFlinger,
    AudioStreamOut* output,
    audio_io_handle_t id,
    audio_devices_t device,
    type_t type)
    :   ThreadBase(audioFlinger, id, device, AUDIO_DEVICE_NONE, type),
    ...
{...(initializes volume-related parameters and fetches the output device's parameters)}

// Here the PlaybackThread starts running, executing threadLoop()
void AudioFlinger::PlaybackThread::onFirstRef()
{
    run(mName, ANDROID_PRIORITY_URGENT_AUDIO);
}

// threadLoop() is the core of the whole AudioFlinger; the mixing work happens here
bool AudioFlinger::PlaybackThread::threadLoop()
{
    ...
    cacheParameters_l();
    ...
    checkSilentMode_l();
    while (!exitPending())
    {
        ...
        processConfigEvents_l();
        ...
        size_t size = mActiveTracks.size();
        for (size_t i = 0; i < size; i++) {
            sp<Track> t = mActiveTracks[i].promote();
            if (t != 0) {
                mLatchD.mFramesReleased.add(t.get(),
                        t->mAudioTrackServerProxy->framesReleased());
            }
        }
        ...
        saveOutputTracks();
        ...
        threadLoop_standby();
        ...
        clearOutputTracks();
        ...
        checkSilentMode_l();
        ...
        mMixerStatus = prepareTracks_l(&tracksToRemove);
        ...
        // mixing: mainly sets up the AudioMixer parameters
        threadLoop_mix();
        ...
        ssize_t ret = threadLoop_write();
        ...
        threadLoop_drain();
        ...
        threadLoop_removeTracks(tracksToRemove);
        tracksToRemove.clear();
    }
    threadLoop_exit();
    ...
}

At this point all the threads and services are ready; an AudioTrack can now be played.

Client-Side AudioTrack Playback

The Android SDK exposes MediaPlayer and the lower-level AudioTrack API. MediaPlayer does some decoding work, but in the end it also plays through AudioTrack.
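As a quick orientation before diving into the Java path below, here is a minimal sketch of driving the native AudioTrack (C++) API directly; parameter values are illustrative and error handling is trimmed:

#include <media/AudioTrack.h>

using namespace android;

// Sketch: play 100 ms of silence through a stream-mode native AudioTrack.
void playSilenceSketch() {
    sp<AudioTrack> track = new AudioTrack(
            AUDIO_STREAM_MUSIC,        // stream type: feeds strategy/output selection
            44100,                     // sample rate in Hz
            AUDIO_FORMAT_PCM_16_BIT,   // sample format
            AUDIO_CHANNEL_OUT_STEREO,  // channel mask
            0);                        // frameCount 0: let the framework choose
    if (track->initCheck() != NO_ERROR) return;  // set() failed in the constructor

    track->start();                    // wakes the PlaybackThread (see below)
    int16_t pcm[4410 * 2] = {};        // 100 ms of stereo silence
    track->write(pcm, sizeof(pcm));    // blocking write into the shared buffer
    track->stop();
}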

Constructing the AudioTrack

The call starts in the app: the AudioTrack (Java) constructor invokes the JNI function native_setup.

//--->frameworks/base/media/java/android/media/AudioTrack.java
 public AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
            int mode, int sessionId)throws IllegalArgumentException {
    ...(parameter checks)
    mStreamType = AudioSystem.STREAM_DEFAULT;
    int[] session = new int[1];
    session[0] = sessionId;
    // native initialization
    int initResult = native_setup(new WeakReference<AudioTrack>(this), mAttributes,
            mSampleRate, mChannels, mAudioFormat,
            mNativeBufferSizeInBytes, mDataLoadMode, session);
    ...
}

The corresponding JNI function is android_media_AudioTrack_setup, which creates an AudioTrack (C++) and configures its parameters.

//--->android/frameworks/base/core/jni/android_media_AudioTrack.cpp
static jint android_media_AudioTrack_setup(JNIEnv *env, jobject thiz, jobject weak_this,
    jobject jaa,
    jint sampleRateInHertz, jint javaChannelMask,
    jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession) {

    ...(check parameters)
    sp<AudioTrack> lpTrack = new AudioTrack();
    ...
    // this looks important
    AudioTrackJniStorage* lpJniStorage = new AudioTrackJniStorage();

    // configure the track according to the memory mode
    switch (memoryMode) {
    case MODE_STREAM:
        status = lpTrack->set(...);
        break;
    case MODE_STATIC:
        // AudioTrack is using shared memory
        status = lpTrack->set(...);
        break;
    }
    ...(error handling)
}

AudioTrack's no-argument constructor is trivial; the real work happens in set().

//--->frameworks/av/media/libmedia/AudioTrack.cpp
AudioTrack::AudioTrack()
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mAttributes.content_type = AUDIO_CONTENT_TYPE_UNKNOWN;
    mAttributes.usage = AUDIO_USAGE_UNKNOWN;
    mAttributes.flags = 0x0;
    strcpy(mAttributes.tags, "");
}

AudioTrack::set() does a great deal; the server-side track is in fact created at this point.

//--->frameworks/av/media/libmedia/AudioTrack.cpp
status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        const sp<IMemory>& sharedBuffer,
        bool threadCanCallJava,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes)
{
    ...(set parameters, etc.)
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
    }


    status_t status = createTrack_l();
    ...(error handling)
}

A digression here on how a track selects its output, something I run into constantly at work. The audio route is actually fixed when the track is created; I used to assume it was chosen after start(). Before createTrack, getOutputForAttr() is called to obtain the output for the current stream (the messy relationships between output, device and stream deserve a follow-up when time allows).

//--->frameworks/av/media/libmedia/AudioTrack.cpp
status_t AudioTrack::createTrack_l()
{
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    ...(compute frameCount, latency, etc.)

    // This output is the route down to the HAL; adb shell dumpsys media.audio_policy lists all outputs
    status_t status = AudioSystem::getOutputForAttr(attr, &output,
                                   (audio_session_t)mSessionId, &streamType, mClientUid,
                                   mSampleRate, mFormat, mChannelMask,
                                   mFlags, mSelectedDeviceId, mOffloadInfo);

    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,  // create the server-side track
        mSampleRate,
        format,
        mChannelMask,
        &temp,
        &trackFlags,
        mSharedBuffer,
        output,
        tid,
        &mSessionId,
        mClientUid,
        &status);

    ...(debug code, etc.)
    // AudioTrackClientProxy manages the cblk and the communication with the server
    mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    ...
}

// AudioSystem forwards to AudioPolicyService
//--->frameworks/av/media/libmedia/AudioSystem.cpp
status_t AudioSystem::getOutputForAttr(const audio_attributes_t *attr,
                                        audio_io_handle_t *output,
                                        audio_session_t session,
                                        audio_stream_type_t *stream,
                                        uid_t uid,
                                        uint32_t samplingRate,
                                        audio_format_t format,
                                        audio_channel_mask_t channelMask,
                                        audio_output_flags_t flags,
                                        audio_port_handle_t selectedDeviceId,
                                        const audio_offload_info_t *offloadInfo)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getOutputForAttr(attr, output, session, stream, uid,
                                 samplingRate, format, channelMask,
                                 flags, selectedDeviceId, offloadInfo);
}

// AudioPolicyService forwards to AudioPolicyManager
//--->/frameworks/av/services/audiopolicy/service/AudioPolicyInterfaceImpl.cpp
status_t AudioPolicyService::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uid_t uid,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              audio_port_handle_t selectedDeviceId,
                                              const audio_offload_info_t *offloadInfo)
{
    ...
    if (IPCThreadState::self()->getCallingPid() != getpid_cached || uid == (uid_t)-1) {
        uid_t newclientUid = IPCThreadState::self()->getCallingUid();
        if (uid != (uid_t)-1 && uid != newclientUid) {
            ALOGW("%s uid %d tried to pass itself off as %d", __FUNCTION__, newclientUid, uid);
        }
        uid = newclientUid;
    }
    return mAudioPolicyManager->getOutputForAttr(attr, output, session, stream, uid, samplingRate,
                                    format, channelMask, flags, selectedDeviceId, offloadInfo);
}

// Stream, strategy and output selection all happen here; fairly important
//--->frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp
status_t AudioPolicyManager::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uid_t uid,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              audio_port_handle_t selectedDeviceId,
                                              const audio_offload_info_t *offloadInfo)
{
    ...(stream type checks)
    // get the routing strategy
    routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
    // get the device for that strategy
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
    ...(flag checks)
    // get the output for that device
    *output = getOutputForDevice(device, session, *stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
    ...
    return NO_ERROR;
}

Next, the server-side AudioFlinger calls PlaybackThread::createTrack_l() to create the Track.

//--->frameworks/av/services/audioflinger/AudioFlinger.cpp
sp<IAudioTrack> AudioFlinger::createTrack(
    audio_stream_type_t streamType,
    uint32_t sampleRate,
    audio_format_t format,
    audio_channel_mask_t channelMask,
    size_t *frameCount,
    IAudioFlinger::track_flags_t *flags,
    const sp<IMemory>& sharedBuffer,
    audio_io_handle_t output,
    pid_t tid,
    int *sessionId,
    int clientUid,
    status_t *status)
{
    ...(parameter checks)
    // pick the thread corresponding to the output
    PlaybackThread *thread = checkPlaybackThread_l(output);
    ...

    track = thread->createTrack_l(client, streamType, sampleRate, format,
            channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, clientUid, &lStatus);
    ...
    // return handle to client
    trackHandle = new TrackHandle(track);

Exit:
    *status = lStatus;
    return trackHandle;
}

The PlaybackThread then creates the concrete Track object.

//--->android/frameworks/av/services/audioflinger/Threads.cpp
sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
    const sp<AudioFlinger::Client>& client,
    audio_stream_type_t streamType,
    uint32_t sampleRate,
    audio_format_t format,
    audio_channel_mask_t channelMask,
    size_t *pFrameCount,
    const sp<IMemory>& sharedBuffer,
    int sessionId,
    IAudioFlinger::track_flags_t *flags,
    pid_t tid,
    int uid,
    status_t *status)
{
    ...(set parameters)
    if (!isTimed) {
        track = new Track(this, client, streamType, sampleRate, format,
                          channelMask, frameCount, NULL, sharedBuffer,
                          sessionId, uid, *flags, TrackBase::TYPE_DEFAULT);
    } else {
        track = TimedTrack::create(this, client, streamType, sampleRate, format,
                channelMask, frameCount, sharedBuffer, sessionId, uid);
    }
    ...
}

(TODO: an analysis of Track construction should go here)

At this point the client-side and server-side tracks are both created and just waiting to play.

Playing the AudioTrack

AudioTrack (Java) calls play() to prepare audio playback.

//--->frameworks/base/media/java/android/media/AudioTrack.java
public void play()
throws IllegalStateException {
    if (mState != STATE_INITIALIZED) {
        throw new IllegalStateException("play() called on uninitialized AudioTrack.");
    }
    if (isRestricted()) {
        setVolume(0);
    }
    synchronized(mPlayStateLock) {
        native_start();
        mPlayState = PLAYSTATE_PLAYING;
    }
}

play() calls the JNI native_start(), which corresponds to android_media_AudioTrack_start(). That function merely forwards the call to AudioTrack (C++)'s start(); AudioTrack's start() in turn calls start() on TrackHandle (the proxy for the server-side Track), which finally reaches the server-side Track's start().

//--->frameworks/av/media/libmedia/AudioTrack.cpp
status_t AudioTrack::start()
{
    ...(flag and parameter checks)
    status = mAudioTrack->start();
    ...
}

TrackHandle only forwards the call; it ends up triggering PlaybackThread::addTrack_l().

//--->frameworks/av/services/audioflinger/Tracks.cpp
status_t AudioFlinger::TrackHandle::start() {
    return mTrack->start();
}

status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
                                                    int triggerSession __unused)
{
    ...
    status = playbackThread->addTrack_l(this);
    ...
}

PlaybackThread::addTrack_l() mainly adds the track to mActiveTracks and wakes the sleeping PlaybackThread.

//--->frameworks/av/services/audioflinger/Threads.cpp
status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
{
    ...
    mActiveTracks.add(track);
    ...
    onAddNewTrack_l();
}


void AudioFlinger::PlaybackThread::onAddNewTrack_l()
{
    ALOGV("signal playback thread");
    broadcast_l();
}

void AudioFlinger::PlaybackThread::broadcast_l()
{
    // Thread could be blocked waiting for async
    // so signal it to handle state changes immediately
    // If threadLoop is currently unlocked a signal of mWaitWorkCV will
    // be lost so we also flag to prevent it blocking on mWaitWorkCV
    mSignalPending = true;
    mWaitWorkCV.broadcast();
}
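The other half of this handshake is the wait inside threadLoop(). Schematically, using the same Mutex/Condition primitives (a sketch, not the real code, which also deals with timeouts and standby):

#include <utils/Condition.h>
#include <utils/Mutex.h>

using namespace android;

// Sketch of the consumer side: mSignalPending guards against a broadcast
// fired while threadLoop() runs unlocked, which a bare condition-variable
// wait would otherwise lose.
static void waitUntilSignalled(Mutex& lock, Condition& waitWorkCV,
                               bool& signalPending) {
    Mutex::Autolock _l(lock);
    while (!signalPending) {
        waitWorkCV.wait(lock);   // sleeps until broadcast_l() wakes us
    }
    signalPending = false;       // consume the wake-up
}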

Next comes writing the audio data: AudioTrack.java calls write() to push the audio data down (this is what actually plays the sound).

//--->frameworks/base/media/java/android/media/AudioTrack.java
public int write(byte[] audioData, int offsetInBytes, int sizeInBytes) {
    int ret = native_write_byte(audioData, offsetInBytes, sizeInBytes, mAudioFormat,
            true /*isBlocking*/);
    ...
    return ret;
}

native_write_byte() corresponds to the JNI function android_media_AudioTrack_write_byte(), which mainly fetches the data passed down from Java and calls writeToTrack() to copy it into shared memory; writeToTrack() in turn handles stream and static tracks differently.

//--->android/frameworks/base/core/jni/android_media_AudioTrack.cpp
static jint android_media_AudioTrack_write_byte(JNIEnv *env,  jobject thiz,
    jbyteArray javaAudioData,
    jint offsetInBytes, jint sizeInBytes,
    jint javaAudioFormat,
    jboolean isWriteBlocking)
{
    ...
    cAudioData = (jbyte *)env->GetByteArrayElements(javaAudioData, NULL);
    ...
    jint written = writeToTrack(lpTrack, javaAudioFormat, cAudioData, offsetInBytes, sizeInBytes,
            isWriteBlocking == JNI_TRUE /* blocking */);
    ...
}

jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
                  jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInBytes, sizeInBytes, blocking);
    } else {
        ...
        switch (format) {
        default:
        case AUDIO_FORMAT_PCM_FLOAT:
        case AUDIO_FORMAT_PCM_16_BIT: {
            ...
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            ...
            } break;
        case AUDIO_FORMAT_PCM_8_BIT: {
            ...
            memcpy_to_i16_from_u8(dst, src, count);
            ...
            } break;

        }
    }
    return written;

}

A stream-type track goes through AudioTrack (C++)'s write(): it obtains a chunk of shared memory with obtainBuffer(), writes the data into it, and releases it with releaseBuffer() once done (handing it over to AudioFlinger).

//--->frameworks/av/media/libmedia/AudioTrack.cpp
ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    ...(parameter checks)
    while (userSize >= mFrameSize) {
          audioBuffer.frameCount = userSize / mFrameSize;
          status_t err = obtainBuffer(&audioBuffer,
                  blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
          ...(error handling)
          ...(memcpy buffer -> audioBuffer);
          ...(compute the remaining data)
          releaseBuffer(&audioBuffer);
      }
      ...(return the number of bytes written)
}

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, int32_t waitCount)
{
    ...(parameter conversion and computation)
    return obtainBuffer(audioBuffer, requested);
}

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, const struct timespec *requested,
        struct timespec *elapsed, size_t *nonContig)
{
    ...(parameter conversion)
    status = proxy->obtainBuffer(&buffer, requested, elapsed);
    ...(fill in the results)
}

void AudioTrack::releaseBuffer(Buffer* audioBuffer)
{
    ...
    mProxy->releaseBuffer(&buffer);
    ...
}

The concrete implementations of obtainBuffer and releaseBuffer are delegated to AudioTrackClientProxy, whose main job is managing the cblk object and the shared memory; this deserves a deeper look. A toy model follows.
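As a mental model only (this is not the real proxy code, which adds futex-based blocking, wrap-safe counters and underrun accounting), the cblk boils down to a single-producer/single-consumer ring buffer shared between the client and the mixer thread:

#include <atomic>
#include <cstdint>

// Toy cblk: two monotonically increasing frame counters in shared memory.
struct ToyCblk {
    std::atomic<uint32_t> front{0};  // frames consumed by the server (mixer)
    std::atomic<uint32_t> rear{0};   // frames produced by the client
};

// Toy client proxy: obtain()/release() mirror obtainBuffer()/releaseBuffer().
class ToyClientProxy {
public:
    ToyClientProxy(ToyCblk* cblk, int16_t* buf, uint32_t frames)
        : mCblk(cblk), mBuf(buf), mFrames(frames) {}

    // How many frames may the client write, and where? (contiguity ignored)
    uint32_t obtain(int16_t** out) {
        uint32_t rear = mCblk->rear.load(std::memory_order_relaxed);
        uint32_t filled = rear - mCblk->front.load(std::memory_order_acquire);
        *out = mBuf + (rear % mFrames);
        return mFrames - filled;     // free space; 0 means the buffer is full
    }

    // Publish the written frames so the mixer's getNextBuffer() can see them.
    void release(uint32_t frames) {
        mCblk->rear.fetch_add(frames, std::memory_order_release);
    }

private:
    ToyCblk* mCblk;      // lives in the shared memory region
    int16_t* mBuf;       // audio data area, also shared
    uint32_t mFrames;    // capacity in frames
};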

The server reads the audio data out of shared memory in PlaybackThread::threadLoop(). MixerThread uses the same function but overrides key pieces such as threadLoop_mix() (classic polymorphism).

//--->frameworks/av/services/audioflinger/Threads.cpp
bool AudioFlinger::PlaybackThread::threadLoop()
{
    ...
    cacheParameters_l();
    ...
    acquireWakeLock();
    ...
    checkSilentMode_l();
    while (!exitPending()){
        ...(lock)
        processConfigEvents_l();
        ...
        saveOutputTracks();
        ...(wakelock wait, not yet understood, **mark**)
        ...
        threadLoop_standby(); // prepare the audio device??
        ...(parameter checks)
        prepareTracks_l(&tracksToRemove);
        ...
        if (mBytesRemaining == 0) {
            mCurrentWriteLength = 0;
            if (mMixerStatus == MIXER_TRACKS_READY) {
                // threadLoop_mix() sets mCurrentWriteLength
                threadLoop_mix(); // mix
            } ...(handle other cases)
            ...(effects processing)
        }

        if (mBytesRemaining) {
              ssize_t ret = threadLoop_write(); // write to the audio device
              if (ret < 0) {
                  mBytesRemaining = 0;
              } else {
                  mBytesWritten += ret;
                  mBytesRemaining -= ret;
              }
          }...(handle other cases)
          // playback finished: remove the tracks that have been played
          threadLoop_removeTracks(tracksToRemove);
          tracksToRemove.clear();
          clearOutputTracks();
          effectChains.clear();
    }

    threadLoop_exit();
    ...
    releaseWakeLock();
    mWakeLockUids.clear();
    mActiveTracksGeneration++;
}

Here it is worth focusing on MixerThread's threadLoop_mix() and threadLoop_write().
threadLoop_mix() calls AudioMixer's process(); threadLoop_write() ultimately calls mOutput->stream->write(), pushing the data down to the driver.

//--->frameworks/av/services/audioflinger/Threads.cpp
void AudioFlinger::MixerThread::threadLoop_mix()
{
    ...
    // mix buffers...
    mAudioMixer->process(pts);
    ...
}

ssize_t AudioFlinger::MixerThread::threadLoop_write()
{
    ...(handle some FastMixer cases)
    return PlaybackThread::threadLoop_write();
}

ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
    ...(some sink operations)
    bytesWritten = mOutput->stream->write(mOutput->stream,
                                           (char *)mSinkBuffer + offset, mBytesRemaining);
    ...
}

AudioMixer's process() is a hook function: depending on the situation it dispatches to one of a family of process-prefixed functions in AudioMixer (the mechanism is sketched after this list), such as:

  • void AudioMixer::process__validate(state_t* state, int64_t pts)
  • void AudioMixer::process__nop(state_t* state, int64_t pts)
  • void AudioMixer::process__genericNoResampling(state_t* state, int64_t pts)
  • void AudioMixer::process__genericResampling(state_t* state, int64_t pts)
  • void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state,int64_t pts)
  • void AudioMixer::process__OneTrack24BitsStereoNoResampling(state_t* state,int64_t pts)
  • void AudioMixer::process_NoResampleOneTrack(state_t* state, int64_t pts)
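Schematically, the hook is just a re-targetable function pointer held in state_t; process__validate() re-points it whenever the track configuration changes, so the steady-state path runs a specialized routine with no per-buffer branching. A simplified sketch, not the actual AudioMixer code:

#include <cstdint>

struct state_t;
typedef void (*process_hook_t)(state_t* state, int64_t pts);

struct state_t {
    uint32_t enabledTracks;  // bitmask of active tracks
    process_hook_t hook;     // current processing routine
};

static void process__nop(state_t*, int64_t) { /* nothing to mix */ }
static void process__generic(state_t*, int64_t) { /* handles any config */ }

// On configuration changes the hook points here first; it picks the
// cheapest matching routine and runs it once for the current buffer.
static void process__validate(state_t* state, int64_t pts) {
    state->hook = state->enabledTracks ? process__generic : process__nop;
    state->hook(state, pts);
}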

Take process__OneTrack16BitsStereoNoResampling as an example: it obtains and returns buffers via the Track's getNextBuffer() and releaseBuffer().

//--->frameworks/av/services/audioflinger/AudioMixer.cpp
void AudioMixer::process(int64_t pts)
{
    mState.hook(&mState, pts);
}

void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state,
                                                           int64_t pts)
{
    const int i = 31 - __builtin_clz(state->enabledTracks);
    const track_t& t = state->tracks[i];
    AudioBufferProvider::Buffer& b(t.buffer);
    ...(volume setup)
    while (numFrames) {
        ...(frame count computation)
        t.bufferProvider->getNextBuffer(&b, outputPTS);     // fetch the buffer written by the client
        ...(error handling)

        // mix
        switch (t.mMixerFormat) {
            case AUDIO_FORMAT_PCM_FLOAT:  ... break;
            case AUDIO_FORMAT_PCM_16_BIT: ... break;
            default:LOG_ALWAYS_FATAL("bad mixer format: %d", t.mMixerFormat);
        }
        ...
        t.bufferProvider->releaseBuffer(&b);
    }
}

Other Questions

This was only a quick pass over the flow of playing audio with AudioTrack; there is still a lot more to dig into elsewhere:

  • Synchronization over the shared memory (AudioTrackClientProxy's obtainBuffer/releaseBuffer and the bufferProvider's getNextBuffer/releaseBuffer);
  • Where mOutput->stream->write() ultimately ends up.
  • The audio policies inside AudioPolicyService
  • What the higher-level MediaPlayer does