The relationship and position of AudioSystem within the audio framework is shown in the figure.
The Java-level AudioSystem class exposes audio functionality by calling through JNI into the native layer, where a corresponding AudioSystem.cpp provides the implementation.
The main functionality of the Java layer can be seen in its code.
It defines the audio stream types, the input/output device types, and so on.
For the more involved ones, the audio input and output device names are as follows.
Output devices:
- DEVICE_OUT_EARPIECE: earpiece
- DEVICE_OUT_SPEAKER: speaker
- DEVICE_OUT_WIRED_HEADSET: wired headset with microphone
- DEVICE_OUT_WIRED_HEADPHONE: wired headphones without microphone
- DEVICE_OUT_BLUETOOTH_SCO: Bluetooth SCO (connection-oriented link, mainly for voice)
- DEVICE_OUT_BLUETOOTH_SCO_HEADSET: Bluetooth headset with microphone
- DEVICE_OUT_BLUETOOTH_SCO_CARKIT: Bluetooth car kit
- DEVICE_OUT_BLUETOOTH_A2DP: Bluetooth stereo (A2DP)
- DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES: Bluetooth stereo headphones
- DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER: Bluetooth stereo speaker
- DEVICE_OUT_AUX_DIGITAL: auxiliary digital output
- DEVICE_OUT_ANLG_DOCK_HEADSET: analog wired headset connected through a dock
- DEVICE_OUT_DGTL_DOCK_HEADSET: digital wired headset connected through a dock
- DEVICE_OUT_FM_HEADPHONE: FM headphones
- DEVICE_OUT_FM_SPEAKER: FM speaker
- DEVICE_OUT_SPEAKER_SSPA2
- DEVICE_OUT_HDMI: HDMI output
- DEVICE_OUT_FM_TRANSMITTER: FM transmitter
Input devices:
- DEVICE_IN_COMMUNICATION: microphone used for communication
- DEVICE_IN_AMBIENT: ambient microphone
- DEVICE_IN_BUILTIN_MIC: built-in microphone
- DEVICE_IN_BLUETOOTH_SCO_HEADSET: microphone on a Bluetooth headset
- DEVICE_IN_WIRED_HEADSET: microphone on a wired headset
- DEVICE_IN_AUX_DIGITAL: auxiliary digital input
- DEVICE_IN_VOICE_CALL: in-call audio capture
- DEVICE_IN_BACK_MIC: back (rear) microphone
- DEVICE_IN_VT_MIC
- DEVICE_IN_FMRADIO: FM input
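In AOSP these DEVICE_OUT_* / DEVICE_IN_* names are defined as bit flags, so several devices can be expressed in a single routing mask. A minimal sketch of that convention (the class name and constant values below are assumptions for illustration, not copied from AudioSystem.java):

```java
// Illustrative sketch: device types as bit flags so one int mask can name
// several devices at once. Values are assumed for the example.
public class DeviceMask {
    public static final int DEVICE_OUT_EARPIECE      = 0x1;
    public static final int DEVICE_OUT_SPEAKER       = 0x2;
    public static final int DEVICE_OUT_WIRED_HEADSET = 0x4;

    // Combine several device flags into one routing mask.
    public static int combine(int... devices) {
        int mask = 0;
        for (int d : devices) mask |= d;
        return mask;
    }

    // Test whether a mask contains a given device bit.
    public static boolean contains(int mask, int device) {
        return (mask & device) != 0;
    }
}
```

This is why a route like "speaker plus wired headset" can travel through the framework as a single integer.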
At the native layer, AudioSystem mainly interacts with AudioFlinger (AF) and AudioPolicyService (APS). Through the following objects, clients, and callbacks it completes the connections upward and downward:
```cpp
sp<IAudioFlinger> AudioSystem::gAudioFlinger;
sp<AudioSystem::AudioFlingerClient> AudioSystem::gAudioFlingerClient;
audio_error_callback AudioSystem::gAudioErrorCallback = NULL;
dynamic_policy_callback AudioSystem::gDynPolicyCallback = NULL;
record_config_callback AudioSystem::gRecordConfigCallback = NULL;

sp<IAudioPolicyService> AudioSystem::gAudioPolicyService;
sp<AudioSystem::AudioPolicyServiceClient> AudioSystem::gAudioPolicyServiceClient;
```
get_audio_flinger obtains a handle to the AF service and creates an AudioFlingerClient (AFC), triggering the registration callback. After that, AF's services can be used through the handle. Notifications going up from AF are delivered through the client, while AudioSystem's own upward notifications use the registered callbacks.
```cpp
const sp<IAudioFlinger> AudioSystem::get_audio_flinger()
{
    sp<IAudioFlinger> af;
    sp<AudioFlingerClient> afc;
    {
        Mutex::Autolock _l(gLock);
        if (gAudioFlinger == 0) {
            sp<IServiceManager> sm = defaultServiceManager();
            sp<IBinder> binder;
            do {
                binder = sm->getService(String16("media.audio_flinger"));
                if (binder != 0)
                    break;
                ALOGW("AudioFlinger not published, waiting...");
                usleep(500000); // 0.5 s
            } while (true);
            if (gAudioFlingerClient == NULL) {
                gAudioFlingerClient = new AudioFlingerClient();
            } else {
                if (gAudioErrorCallback) {
                    gAudioErrorCallback(NO_ERROR);
                }
            }
            binder->linkToDeath(gAudioFlingerClient);
            gAudioFlinger = interface_cast<IAudioFlinger>(binder);
            LOG_ALWAYS_FATAL_IF(gAudioFlinger == 0);
            afc = gAudioFlingerClient;
        }
        af = gAudioFlinger;
    }
    if (afc != 0) {
        af->registerClient(afc);
    }
    return af;
}
```
Likewise, get_audio_policy_service obtains a handle to the APS service and creates an AudioPolicyServiceClient (APSC). After that, APS's services can be used through the ap handle, and notifications going up from APS are delivered through the client.
```cpp
const sp<IAudioPolicyService> AudioSystem::get_audio_policy_service()
{
    sp<IAudioPolicyService> ap;
    sp<AudioPolicyServiceClient> apc;
    {
        Mutex::Autolock _l(gLockAPS);
        if (gAudioPolicyService == 0) {
            sp<IServiceManager> sm = defaultServiceManager();
            sp<IBinder> binder;
            do {
                binder = sm->getService(String16("media.audio_policy"));
                if (binder != 0)
                    break;
                ALOGW("AudioPolicyService not published, waiting...");
                usleep(500000); // 0.5 s
            } while (true);
            if (gAudioPolicyServiceClient == NULL) {
                gAudioPolicyServiceClient = new AudioPolicyServiceClient();
            }
            binder->linkToDeath(gAudioPolicyServiceClient);
            gAudioPolicyService = interface_cast<IAudioPolicyService>(binder);
            LOG_ALWAYS_FATAL_IF(gAudioPolicyService == 0);
            apc = gAudioPolicyServiceClient;
        }
        ap = gAudioPolicyService;
    }
    if (apc != 0) {
        ap->registerClient(apc);
    }
    return ap;
}
```
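Both getters follow the same shape: a lazily initialized global guarded by a mutex, a retry loop until ServiceManager publishes the binder, then death-link and client registration. A minimal Java sketch of just the cache-and-retry part, under assumed names (ServiceCache is invented, and the sketch retries every 10 ms instead of the real 0.5 s; linkToDeath/registerClient are only noted in comments):

```java
import java.util.function.Supplier;

// Sketch of the pattern shared by get_audio_flinger() and
// get_audio_policy_service(): cache the service handle under a lock and poll
// until the service is published.
public class ServiceCache<T> {
    private T cached;                  // plays the role of gAudioFlinger
    private final Supplier<T> lookup;  // plays the role of ServiceManager.getService()

    public ServiceCache(Supplier<T> lookup) {
        this.lookup = lookup;
    }

    public synchronized T get() {
        if (cached == null) {
            T binder;
            while ((binder = lookup.get()) == null) {
                // "service not published, waiting..." — short sleep for the sketch
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
            // the real code would linkToDeath and register a client here
            cached = binder;
        }
        return cached;
    }
}
```

Once bound, every later call returns the cached handle without touching the service manager again.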
AudioFlingerClient and AudioPolicyServiceClient are both declared in AudioSystem.h and implemented in AudioSystem.cpp.
For AudioFlingerClient, the flow is: it first registers with AF; AF creates a NotificationClient object and adds it to its internal list; finally, AF's playback/record threads send channel-opened events to the client.
```cpp
void AudioFlinger::registerClient(const sp<IAudioFlingerClient>& client)
{
    Mutex::Autolock _l(mLock);
    if (client == 0) {
        return;
    }
    pid_t pid = IPCThreadState::self()->getCallingPid();
    {
        Mutex::Autolock _cl(mClientLock);
        if (mNotificationClients.indexOfKey(pid) < 0) {
            sp<NotificationClient> notificationClient = new NotificationClient(this, client, pid);
            ALOGV("registerClient() client %p, pid %d", notificationClient.get(), pid);

            mNotificationClients.add(pid, notificationClient);

            sp<IBinder> binder = IInterface::asBinder(client);
            binder->linkToDeath(notificationClient);
        }
    }

    for (size_t i = 0; i < mPlaybackThreads.size(); i++) {
        mPlaybackThreads.valueAt(i)->sendIoConfigEvent(AUDIO_OUTPUT_OPENED, pid);
    }

    for (size_t i = 0; i < mRecordThreads.size(); i++) {
        mRecordThreads.valueAt(i)->sendIoConfigEvent(AUDIO_INPUT_OPENED, pid);
    }
}
```
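The registerClient() logic above amounts to a per-pid registry that ignores duplicate registrations and replays "opened" events for already-open streams to the newly registered client. A hedged sketch of that behavior (class and event names are invented for the example):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of AudioFlinger::registerClient(): one notification record per
// caller pid, duplicates ignored, and an "opened" event replayed to the new
// client for every already-open output.
public class NotificationRegistry {
    public interface Client {
        void onIoConfigEvent(String event);
    }

    private final Map<Integer, Client> clientsByPid = new HashMap<>();
    private final List<String> openOutputs = new ArrayList<>();

    // An output thread opens: notify every registered client.
    public synchronized void openOutput(String name) {
        openOutputs.add(name);
        for (Client c : clientsByPid.values()) {
            c.onIoConfigEvent("OUTPUT_OPENED:" + name);
        }
    }

    public synchronized void registerClient(int pid, Client client) {
        if (clientsByPid.containsKey(pid)) {
            return; // already registered, like indexOfKey(pid) >= 0
        }
        clientsByPid.put(pid, client);
        // like sendIoConfigEvent(AUDIO_OUTPUT_OPENED, pid) per playback thread
        for (String name : openOutputs) {
            client.onIoConfigEvent("OUTPUT_OPENED:" + name);
        }
    }
}
```

The replay step is why a client registering late still learns about outputs opened before it arrived.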
In practice, AF does not pass many events up to AudioSystem via the AFC path, as can be seen from mNotificationClients and NotificationClient::mAudioFlingerClient; so this is not a major flow.
A brief analysis of AudioSystem's setErrorCallback.
An application registers its own callback with the Java layer via setErrorCallback; the snippet below shows the same registration pattern, using Camera's ErrorCallback as the example:
```java
private final CameraErrorCallback mCameraErrorCallback = new CameraErrorCallback();

private final class CameraErrorCallback
        implements android.hardware.Camera.ErrorCallback {
    public void onError(int error, android.hardware.Camera camera) {
        Assert.fail(String.format("Camera error, code: %d", error));
    }
}
```
In the Java framework, errorCallbackFromNative forwards the native status callback to the application's listener. Per this logic, each application must set its own callback when it becomes active and may clear it on exit; delivery in the background is not guaranteed, because multiple listeners are not supported — only the most recently set callback is kept.
```java
public static void setErrorCallback(ErrorCallback cb) {
    synchronized (AudioSystem.class) {
        mErrorCallback = cb;
        if (cb != null) {
            cb.onError(checkAudioFlinger());
        }
    }
}

private static void errorCallbackFromNative(int error) {
    ErrorCallback errorCallback = null;
    synchronized (AudioSystem.class) {
        if (mErrorCallback != null) {
            errorCallback = mErrorCallback;
        }
    }
    if (errorCallback != null) {
        errorCallback.onError(error);
    }
}
```
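The two methods above implement a single-slot, latest-wins listener: setErrorCallback() replaces whatever was registered, and errorCallbackFromNative() snapshots the slot under the class lock but invokes it outside the lock. A standalone sketch of the same pattern (ErrorDispatch is an invented name; in the framework this lives in AudioSystem):

```java
// Sketch of AudioSystem.java's error-callback pattern: one static slot,
// latest setter wins, snapshot under the lock, invoke outside it.
public class ErrorDispatch {
    public interface ErrorCallback {
        void onError(int error);
    }

    private static ErrorCallback sCallback;

    // Only the most recent caller will be notified.
    public static synchronized void setErrorCallback(ErrorCallback cb) {
        sCallback = cb;
    }

    // Called "from native": snapshot the slot, then invoke without the lock
    // held, so a slow listener cannot block other setErrorCallback() callers.
    public static void errorCallbackFromNative(int error) {
        ErrorCallback cb;
        synchronized (ErrorDispatch.class) {
            cb = sCallback;
        }
        if (cb != null) {
            cb.onError(error);
        }
    }
}
```

Clearing the slot with null simply silences delivery; no queueing or multicast happens.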
At the JNI layer, android_media_AudioSystem_error_callback calls the Java method errorCallbackFromNative:
```cpp
static void android_media_AudioSystem_error_callback(status_t err)
{
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (env == NULL) {
        return;
    }

    jclass clazz = env->FindClass(kClassPathName);

    env->CallStaticVoidMethod(clazz,
            env->GetStaticMethodID(clazz, "errorCallbackFromNative", "(I)V"),
            check_AudioSystem_Command(err));

    env->DeleteLocalRef(clazz);
}
```
Registration with the native layer:
```cpp
AudioSystem::setErrorCallback(android_media_AudioSystem_error_callback);
```
At the native layer, the places in AudioSystem.cpp where gAudioErrorCallback is set and invoked are shown below. The callback is currently used only when the AF service is obtained successfully and when the AFC's binder dies — in other words, at present this callback does very little.
```cpp
/* static */ void AudioSystem::setErrorCallback(audio_error_callback cb)
{
    Mutex::Autolock _l(gLock);
    gAudioErrorCallback = cb;
}

const sp<IAudioFlinger> AudioSystem::get_audio_flinger()
{
    …
    if (gAudioErrorCallback) {
        gAudioErrorCallback(NO_ERROR);
    }

void AudioSystem::AudioFlingerClient::binderDied(const wp<IBinder>& who __unused)
{
    …
    cb = gAudioErrorCallback;
    …
    if (cb) {
        cb(DEAD_OBJECT);
    }
    …
```
The AudioManager class provides the audio-related control interfaces, covering volume settings, routing control, audio focus, sound effects, parameter settings, ringer mode, and so on. Use Context.getSystemService(Context.AUDIO_SERVICE) to obtain an instance of this class.
On the application side, as shown below, it is enough to obtain the service instance and then call the service interfaces.
```java
private AudioManager audioManager;

audioManager = (AudioManager) getSystemService(Service.AUDIO_SERVICE);

audioManager.adjustStreamVolume(AudioManager.STREAM_MUSIC,
        AudioManager.ADJUST_LOWER, AudioManager.FLAG_SHOW_UI);
```
AudioManager is only an interface and wrapper class; the actual work is done by AudioService and AudioSystem, reached through the IAudioService sService handle and static AudioSystem methods.
```java
private static IAudioService sService;

private static IAudioService getService() {
    if (sService != null) {
        return sService;
    }
    IBinder b = ServiceManager.getService(Context.AUDIO_SERVICE);
    sService = IAudioService.Stub.asInterface(b);
    return sService;
}

public boolean isMicrophoneMute() {
    return AudioSystem.isMicrophoneMuted();
}
```
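The wrapper pattern can be condensed as: a facade lazily binds a shared service handle on first use and forwards each call to it. A sketch under assumed names (AudioFacade and VolumeService stand in for AudioManager and IAudioService; they are not Android APIs):

```java
import java.util.function.Supplier;

// Sketch of the AudioManager pattern: a thin facade that binds a shared
// service handle once, caches it statically, and forwards every call.
public class AudioFacade {
    public interface VolumeService {
        void setVolume(int stream, int level);
        int getVolume(int stream);
    }

    private static VolumeService sService;        // like AudioManager.sService
    private final Supplier<VolumeService> binder; // like the ServiceManager lookup

    public AudioFacade(Supplier<VolumeService> binder) {
        this.binder = binder;
    }

    private VolumeService getService() {
        if (sService == null) {
            sService = binder.get(); // bind on first use, then reuse
        }
        return sService;
    }

    public void setStreamVolume(int stream, int level) {
        getService().setVolume(stream, level);
    }

    public int getStreamVolume(int stream) {
        return getService().getVolume(stream);
    }
}
```

The facade itself holds no audio state; everything lives behind the service handle, which is why all AudioManager instances in a process behave identically.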
AudioService is the Java-layer audio service class. It is started in SystemServer, after which applications can use it.
```java
private void startOtherServices() {
    …
    traceBeginAndSlog("StartAudioService");
    mSystemServiceManager.startService(AudioService.Lifecycle.class);
    Trace.traceEnd(Trace.TRACE_TAG_SYSTEM_SERVER);
```
AudioService extends IAudioService.Stub. Internally it uses handlers, callbacks, and broadcast receivers to keep the service running, and it relies on intermediary classes such as AudioSystem, Bluetooth, SoundPool, MediaPlayer, and MediaRecorder to implement the audio features.
```java
public class AudioService extends IAudioService.Stub {
    /** @see AudioSystemThread */
    private AudioSystemThread mAudioSystemThread;
    /** @see AudioHandler */
    private AudioHandler mAudioHandler;

    private final BroadcastReceiver mReceiver = new AudioServiceBroadcastReceiver();

    // BluetoothHeadset API to control SCO connection
    private BluetoothHeadset mBluetoothHeadset;

    // Bluetooth headset device
    private BluetoothDevice mBluetoothHeadsetDevice;

    private volatile IRingtonePlayer mRingtonePlayer;

    private final MediaFocusControl mMediaFocusControl;

    private SoundPool mSoundPool;
```
A few concepts to be clear about:
Mode: the audio mode set for the phone state.
```java
/* modes for setPhoneState, must match AudioSystem.h audio_mode */
public static final int MODE_INVALID          = -2;
public static final int MODE_CURRENT          = -1;
public static final int MODE_NORMAL          = 0;
public static final int MODE_RINGTONE        = 1;
public static final int MODE_IN_CALL         = 2;
public static final int MODE_IN_COMMUNICATION = 3;
public static final int NUM_MODES            = 4;
```
Volume: volume levels are set separately for each stream type.
```java
/* The default audio stream */
public static final int STREAM_DEFAULT = -1;
/* The audio stream for phone calls */
public static final int STREAM_VOICE_CALL = 0;
/* The audio stream for system sounds */
public static final int STREAM_SYSTEM = 1;
/* The audio stream for the phone ring and message alerts */
public static final int STREAM_RING = 2;
/* The audio stream for music playback */
public static final int STREAM_MUSIC = 3;
/* The audio stream for alarms */
public static final int STREAM_ALARM = 4;
/* The audio stream for notifications */
public static final int STREAM_NOTIFICATION = 5;
/* @hide The audio stream for phone calls when connected on bluetooth */
public static final int STREAM_BLUETOOTH_SCO = 6;
/* @hide The audio stream for enforced system sounds in certain countries (e.g. camera in Japan) */
public static final int STREAM_SYSTEM_ENFORCED = 7;
/* @hide The audio stream for DTMF tones */
public static final int STREAM_DTMF = 8;
/* @hide The audio stream for text to speech (TTS) */
public static final int STREAM_TTS = 9;
```
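Per-stream volume means the service keeps an independent, clamped volume index for each stream type, so changing STREAM_MUSIC never touches STREAM_RING. A simplified sketch (the per-stream maxima below are invented for the example; the real tables live in AudioService/AudioSystem):

```java
// Sketch: one independent, clamped volume index per stream type.
public class StreamVolumes {
    public static final int STREAM_RING  = 2;
    public static final int STREAM_MUSIC = 3;

    private final int[] index = new int[10];  // one slot per stream type 0..9
    private final int[] max   = {5, 7, 7, 15, 7, 7, 15, 7, 15, 15}; // assumed maxima

    // Clamp the requested level into [0, max] for that stream only.
    public void setStreamVolume(int stream, int level) {
        index[stream] = Math.max(0, Math.min(level, max[stream]));
    }

    public int getStreamVolume(int stream) {
        return index[stream];
    }
}
```

This is why, for example, lowering media volume with adjustStreamVolume(STREAM_MUSIC, …) leaves the ringer level untouched.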
Detailed code analysis of these two classes will be interleaved into the other business flows.