Source code: https://github.com/yellowgreatsun/MXTtsEngine
The previous two articles (Android TTS, part one: how to give an app TTS capability, and Android TTS, part two: how to develop a system-level TTS engine) covered those two topics respectively. This article digs into the source code of the speech package, after which the ins and outs of its interfaces should be much clearer. I once presented this material internally at my company, but it turns out to be much harder to write up as an article; I'll do my best to explain things clearly. Also, I will trim less important parts from the quoted code to keep the article from running too long.
To make this easier to follow, here is the plan:
- The structure of the speech package source
- Dissecting the TextToSpeech interface
- How the TextToSpeech interface is implemented by TextToSpeechService
- How TextToSpeechService delivers results back to the caller
I. The structure of the speech package source
Source path: /frameworks/base/core/java/android/speech
First, an overview of the package's contents.
The key players are TextToSpeech, TextToSpeechService, ITextToSpeechService, ITextToSpeechCallback, and SynthesisCallback with its implementations (PlaybackSynthesisCallback and FileSynthesisCallback). The first two should be familiar by now: one is the interface apps call for TTS, the other implements the TTS capability. The remaining ones tie the two together.
II. Dissecting the TextToSpeech interface
Here we will only dissect the constructors, speak, and synthesizeToFile. Others, such as setLanguage and getEngines, will be easy to understand once you have read this article, so I won't spell them out.
1. Constructors
There are three constructors:
- TextToSpeech(Context context, OnInitListener listener)
- TextToSpeech(Context context, OnInitListener listener, String engine)
- TextToSpeech(Context context, OnInitListener listener, String engine, String packageName, boolean useFallback)
Reading the code, you'll find the first two ultimately call the third.
public TextToSpeech(Context context, OnInitListener listener, String engine,
        String packageName, boolean useFallback) {
    mContext = context;
    mInitListener = listener;
    mRequestedEngine = engine;
    mUseFallback = useFallback;
    initTts();
}
The assignments at the top are easy to follow; the key information is in initTts().
private int initTts() {
    // Step 1: Try connecting to the engine that was requested.
    if (connectToEngine(mRequestedEngine)) {
        mCurrentEngine = mRequestedEngine;
        return SUCCESS;
    }
    // Step 2: Try connecting to the user's default engine.
    final String defaultEngine = getDefaultEngine();
    if (defaultEngine != null && !defaultEngine.equals(mRequestedEngine)) {
        if (connectToEngine(defaultEngine)) {
            mCurrentEngine = defaultEngine;
            return SUCCESS;
        }
    }
    // Step 3: Try connecting to the highest ranked engine in the system.
    final String highestRanked = mEnginesHelper.getHighestRankedEngineName();
    if (highestRanked != null && !highestRanked.equals(mRequestedEngine) &&
            !highestRanked.equals(defaultEngine)) {
        if (connectToEngine(highestRanked)) {
            mCurrentEngine = highestRanked;
            return SUCCESS;
        }
    }
    mCurrentEngine = null;
    dispatchOnInit(ERROR);
    return ERROR;
}
As the comments indicate, initTts tries engines in this order: the requested engine (identified by package name, via the engine constructor parameter), the user's default engine, and finally the highest-ranked engine on the system. But what matters most for understanding the speech source is connectToEngine(engine); notice that all three paths end up calling it. If connectToEngine returns false, dispatchOnInit(ERROR) tells the caller that initialization failed. So let's look at connectToEngine.
private boolean connectToEngine(String engine) {
    Connection connection = new Connection();
    Intent intent = new Intent(Engine.INTENT_ACTION_TTS_SERVICE);
    intent.setPackage(engine);
    boolean bound = mContext.bindService(intent, connection, Context.BIND_AUTO_CREATE);
    if (!bound) {
        return false;
    } else {
        mConnectingServiceConnection = connection;
        return true;
    }
}
Now it's clear: we're binding a Service, with the action set to Engine.INTENT_ACTION_TTS_SERVICE. This is why, in the second article, the TextToSpeechService subclass had to declare that action in its manifest. Connection is long, so let's look only at onServiceConnected.
@Override
public void onServiceConnected(ComponentName name, IBinder service) {
    synchronized (mStartLock) {
        mService = ITextToSpeechService.Stub.asInterface(service);
        mServiceConnection = Connection.this;
        mOnSetupConnectionAsyncTask = new SetupConnectionAsyncTask(name);
        mOnSetupConnectionAsyncTask.execute();
    }
}
Here we obtain the mService instance; since the bound Service is remote, TextToSpeech's other interfaces also work by calling methods on ITextToSpeechService. Successful initialization is then reported back through dispatchOnInit(result) inside mOnSetupConnectionAsyncTask.
That's all for the constructors.
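As a side note, the three-step fallback order inside initTts can be modeled outside Android as a plain selection function. A minimal sketch (class, method, and engine names here are mine, not from the framework; the connect predicate stands in for bindService succeeding):

```java
import java.util.function.Predicate;

// Hypothetical model of initTts()'s engine-selection order:
// requested engine -> user's default -> highest-ranked system engine.
public class EngineFallback {

    // Returns the first candidate that "binds", or null (dispatchOnInit(ERROR)).
    // Note: the real initTts also skips candidates it has already tried;
    // this sketch simply retries them, which only repeats a failed bind.
    static String chooseEngine(String requested, String defaultEngine,
                               String highestRanked, Predicate<String> connect) {
        for (String candidate : new String[] {requested, defaultEngine, highestRanked}) {
            if (candidate != null && connect.test(candidate)) {
                return candidate; // becomes mCurrentEngine
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // The requested engine fails to bind; the default succeeds.
        Predicate<String> connect = e -> e.equals("com.android.tts");
        System.out.println(chooseEngine("com.vendor.tts", "com.android.tts",
                "com.other.tts", connect)); // com.android.tts
    }
}
```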
2. speak(text, queueMode, params, utteranceId)
Reading the code, you'll find it ultimately calls service.speak(getCallerIdentity(), text, queueMode, getParams(params), utteranceId). That's enough to know for now.
3. synthesizeToFile(text, params, file, utteranceId)
Much like speak, synthesizeToFile ultimately calls service.synthesizeToFileDescriptor(getCallerIdentity(), text, fileDescriptor, getParams(params), utteranceId).
Next, let's step into TextToSpeechService and explore service.speak and service.synthesizeToFileDescriptor.
III. How the TextToSpeech interface is implemented by TextToSpeechService
This part is fairly long, so let's take it piece by piece.
1. mBinder
From dissecting the constructors, we know TextToSpeech works by binding a remote Service, and that remote Service (the one inside the TTS engine, e.g. our MoxiangTtsService) extends TextToSpeechService, so that class is where to focus. First, its onBind:
@Override
public IBinder onBind(Intent intent) {
    if (Engine.INTENT_ACTION_TTS_SERVICE.equals(intent.getAction())) {
        return mBinder;
    }
    return null;
}
The instantiation of mBinder is very long, since it has to implement every method of ITextToSpeechService. Here are just the two we care about, speak and synthesizeToFileDescriptor.
private final ITextToSpeechService.Stub mBinder = new ITextToSpeechService.Stub() {
    @Override
    public int speak(IBinder caller, CharSequence text, int queueMode, Bundle params,
            String utteranceId) {
        SpeechItem item = new SynthesisSpeechItemV1(caller,
                Binder.getCallingUid(), Binder.getCallingPid(), params, utteranceId, text);
        return mSynthHandler.enqueueSpeechItem(queueMode, item);
    }

    @Override
    public int synthesizeToFileDescriptor(IBinder caller, CharSequence text,
            ParcelFileDescriptor fileDescriptor, Bundle params, String utteranceId) {
        final ParcelFileDescriptor sameFileDescriptor = ParcelFileDescriptor.adoptFd(
                fileDescriptor.detachFd());
        SpeechItem item = new SynthesisToFileOutputStreamSpeechItemV1(caller,
                Binder.getCallingUid(), Binder.getCallingPid(), params, utteranceId, text,
                new ParcelFileDescriptor.AutoCloseOutputStream(sameFileDescriptor));
        return mSynthHandler.enqueueSpeechItem(android.speech.tts.TextToSpeech.QUEUE_ADD, item);
    }

    // ... (other ITextToSpeechService methods omitted)
};
Now it clicks: speak is implemented by calling mSynthHandler.enqueueSpeechItem. mSynthHandler is a Handler, and as the name suggests, the call enqueues the synthesis request for the text. synthesizeToFileDescriptor, it turns out, also calls mSynthHandler.enqueueSpeechItem, just with a different item argument, and that difference is where the secret lies.
2. speak
As noted above, its implementation calls mSynthHandler.enqueueSpeechItem, so let's keep going.
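mSynthHandler lives on the service's dedicated synthesis thread, so every enqueued SpeechItem runs strictly one at a time, in arrival order. That behavior can be sketched without Android using a single-thread executor (a model of the pattern, not the framework code; all names below are mine):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Model: requests enqueued from any thread run one-by-one on a worker thread,
// just like Messages posted to mSynthHandler's looper thread.
public class SpeechQueueModel {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    final List<String> played = new CopyOnWriteArrayList<>();

    int enqueueSpeechItem(String utteranceId) {
        worker.submit(() -> played.add(utteranceId)); // stands in for speechItem.play()
        return 0; // stands in for TextToSpeech.SUCCESS
    }

    void shutdown() {
        worker.shutdown();
        try {
            worker.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        SpeechQueueModel q = new SpeechQueueModel();
        q.enqueueSpeechItem("a");
        q.enqueueSpeechItem("b");
        q.shutdown();
        System.out.println(q.played); // items ran in enqueue order: [a, b]
    }
}
```

The single-threaded worker is what makes QUEUE_ADD semantics trivial: later items simply wait their turn.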
public int enqueueSpeechItem(int queueMode, final SpeechItem speechItem) {
    UtteranceProgressDispatcher utterenceProgress = null;
    if (speechItem instanceof UtteranceProgressDispatcher) {
        utterenceProgress = (UtteranceProgressDispatcher) speechItem;
    }
    if (queueMode == android.speech.tts.TextToSpeech.QUEUE_FLUSH) {
        stopForApp(speechItem.getCallerIdentity());
    } else if (queueMode == android.speech.tts.TextToSpeech.QUEUE_DESTROY) {
        stopAll();
    }
    Runnable runnable = new Runnable() {
        @Override
        public void run() {
            if (isFlushed(speechItem)) {
                speechItem.stop();
            } else {
                setCurrentSpeechItem(speechItem);
                speechItem.play();
                setCurrentSpeechItem(null);
            }
        }
    };
    Message msg = Message.obtain(this, runnable);
    msg.obj = speechItem.getCallerIdentity();
    if (sendMessage(msg)) {
        return android.speech.tts.TextToSpeech.SUCCESS;
    } else {
        if (utterenceProgress != null) {
            utterenceProgress.dispatchOnError(android.speech.tts.TextToSpeech.ERROR_SERVICE);
        }
        return android.speech.tts.TextToSpeech.ERROR;
    }
}
Looking at the top first: the queueMode parameter of TextToSpeech's speak shows up here. If it is TextToSpeech.QUEUE_FLUSH, stopForApp is called to stop the app's other pending TTS playback; I won't paste that code. What we care about is speechItem.play().
public void play() {
    playImpl();
}
For playImpl, we need to look at the implementation in SynthesisSpeechItemV1.
@Override
protected void playImpl() {
    AbstractSynthesisCallback synthesisCallback;
    synchronized (this) {
        mSynthesisCallback = createSynthesisCallback();
        synthesisCallback = mSynthesisCallback;
    }
    TextToSpeechService.this.onSynthesizeText(mSynthesisRequest, synthesisCallback);
    // Fix for the case where the client called .start() and .error(), but not .done()
    if (synthesisCallback.hasStarted() && !synthesisCallback.hasFinished()) {
        synthesisCallback.done();
    }
}
At last, the familiar onSynthesizeText(SynthesisRequest request, SynthesisCallback callback). The request wraps the parameters of the synthesis request and is assembled in SynthesisSpeechItemV1's constructor; the callback is created by createSynthesisCallback.
protected AbstractSynthesisCallback createSynthesisCallback() {
    return new PlaybackSynthesisCallback(getAudioParams(),
            mAudioPlaybackHandler, this, getCallerIdentity(), mEventLogger, false);
}
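So everything funnels into onSynthesizeText(request, callback), which the engine (as covered in part two) answers through the callback: start, then one or more audioAvailable calls, then done. That contract can be modeled without any Android classes; the interface and fake engine below are my own stand-ins, not the framework API:

```java
import java.io.ByteArrayOutputStream;

// A fake engine exercising the start -> audioAvailable... -> done contract.
public class FakeEngine {

    // Minimal stand-in for android.speech.tts.SynthesisCallback.
    interface SynthCallback {
        void start(int sampleRateHz, int channelCount);
        void audioAvailable(byte[] buffer, int offset, int length);
        void done();
    }

    // A toy onSynthesizeText: emits one silent frame per character.
    static void onSynthesizeText(String text, SynthCallback cb) {
        cb.start(16000, 1);
        byte[] frame = new byte[320]; // 10 ms of 16 kHz mono 16-bit PCM silence
        for (int i = 0; i < text.length(); i++) {
            cb.audioAvailable(frame, 0, frame.length);
        }
        cb.done(); // mandatory, otherwise the speech item never finishes
    }

    public static void main(String[] args) {
        ByteArrayOutputStream pcm = new ByteArrayOutputStream();
        onSynthesizeText("hi", new SynthCallback() {
            public void start(int rate, int ch) {}
            public void audioAvailable(byte[] b, int off, int len) { pcm.write(b, off, len); }
            public void done() {}
        });
        System.out.println(pcm.size()); // 2 chars x 320 bytes = 640
    }
}
```

The playImpl "fix" we just saw exists precisely because engines sometimes break this contract by never calling done().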
3. synthesizeToFileDescriptor
As mentioned above, the difference between synthesizeToFileDescriptor and speak lies in which SpeechItemV1 subclass is used: speak uses SynthesisSpeechItemV1, while synthesizeToFileDescriptor uses SynthesisToFileOutputStreamSpeechItemV1. By the time playImpl runs, the difference mainly comes down to the callback, created as follows:
protected AbstractSynthesisCallback createSynthesisCallback() {
    return new FileSynthesisCallback(mFileOutputStream.getChannel(), this, false);
}
At this point, we understand how the TextToSpeech interface is implemented by TextToSpeechService. Next, we'll look at how TextToSpeechService's results are delivered back to the caller.
IV. How TextToSpeechService delivers results back to the caller
1. speak
In the second article we already saw that results are reported back by calling the callback's methods. Here, let's look at start, audioAvailable, and done in PlaybackSynthesisCallback.
Pay attention to a few instances: UtteranceProgressDispatcher mDispatcher, AudioPlaybackHandler mAudioTrackHandler, and SynthesisPlaybackQueueItem mItem, all created on the TextToSpeechService side. The first leads to the eventual UtteranceProgressListener callbacks; the other two drive audio playback.
Here are the three methods (abridged):
public int start(int sampleRateInHz, int audioFormat, int channelCount) {
    SynthesisPlaybackQueueItem item = new SynthesisPlaybackQueueItem(
            mAudioParams, sampleRateInHz, audioFormat, channelCount,
            mDispatcher, mCallerIdentity, mLogger);
    mAudioTrackHandler.enqueue(item);
    mItem = item;
    return android.speech.tts.TextToSpeech.SUCCESS;
}
public int audioAvailable(byte[] buffer, int offset, int length) {
    SynthesisPlaybackQueueItem item = mItem;
    final byte[] bufferCopy = new byte[length];
    System.arraycopy(buffer, offset, bufferCopy, 0, length);
    item.put(bufferCopy);
    return android.speech.tts.TextToSpeech.SUCCESS;
}
public int done() {
    if (mStatusCode == android.speech.tts.TextToSpeech.SUCCESS) {
        mDispatcher.dispatchOnSuccess();
    } else {
        mDispatcher.dispatchOnError(mStatusCode);
    }
    SynthesisPlaybackQueueItem item = mItem;
    if (mStatusCode == android.speech.tts.TextToSpeech.SUCCESS) {
        item.done();
    } else {
        item.stop(mStatusCode);
    }
    return android.speech.tts.TextToSpeech.SUCCESS;
}
First, the callbacks: in PlaybackSynthesisCallback's done you can see mDispatcher.dispatchOnSuccess being called, which in turn calls ITextToSpeechCallback.onSuccess, which in turn calls UtteranceProgressListener's onDone. The other callbacks travel the same route, so I won't repeat them. One thing to note: onStart is dispatched not from PlaybackSynthesisCallback but from SynthesisPlaybackQueueItem, since that is when playback actually begins.
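That three-hop relay (synthesis callback to dispatcher, dispatcher to Binder callback, Binder callback to app listener) can be condensed into a toy sketch. All names below are placeholders of mine; in the real stack the middle hop crosses the Binder into the client process:

```java
// Hypothetical relay mirroring:
// PlaybackSynthesisCallback.done() -> UtteranceProgressDispatcher.dispatchOnSuccess()
// -> ITextToSpeechCallback.onSuccess() -> UtteranceProgressListener.onDone().
public class CallbackRelay {
    interface AppListener { void onDone(String utteranceId); }    // models UtteranceProgressListener
    interface BinderCallback { void onSuccess(String utteranceId); } // models ITextToSpeechCallback

    static String relayDone(String utteranceId) {
        final String[] received = new String[1];
        AppListener listener = id -> received[0] = id;       // the app's listener
        BinderCallback binderCallback = listener::onDone;    // would cross the Binder in reality
        binderCallback.onSuccess(utteranceId);               // the dispatcher's dispatchOnSuccess()
        return received[0];
    }

    public static void main(String[] args) {
        System.out.println(relayDone("utt-1")); // utt-1 arrives at the listener
    }
}
```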
Next, playback: AudioPlaybackHandler does not extend Handler, but it plays a similar role, managing the queue of synthesis requests. SynthesisPlaybackQueueItem is a Runnable subclass where playback actually happens; it creates a BlockingAudioTrack, whose internals I'll skip.
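The hand-off inside SynthesisPlaybackQueueItem is essentially a producer/consumer queue: the synthesis thread put()s audio buffers while the playback thread drains them into the audio sink. A rough Android-free model (a LinkedBlockingQueue stands in for its internal buffer list and lock, and a poison pill models done(); all names are mine):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Producer (engine thread) enqueues buffers; consumer (playback thread)
// takes them and "plays" until the poison pill marks done().
public class PlaybackQueueModel {
    private static final byte[] DONE = new byte[0]; // poison pill, stands for done()
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    volatile int bytesPlayed;

    void put(byte[] audio) { queue.offer(audio); } // stands for audioAvailable()
    void done() { queue.offer(DONE); }

    void runPlayback() { // in the framework this would feed a BlockingAudioTrack
        try {
            for (byte[] buf = queue.take(); buf != DONE; buf = queue.take()) {
                bytesPlayed += buf.length;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    static int demo() {
        PlaybackQueueModel item = new PlaybackQueueModel();
        Thread playback = new Thread(item::runPlayback);
        playback.start();
        item.put(new byte[320]); // synthesis thread produces audio...
        item.put(new byte[320]);
        item.done();             // ...and signals completion
        try { playback.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return item.bytesPlayed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 640
    }
}
```

The blocking take() is why playback can start as soon as the first buffer arrives, without waiting for synthesis to finish.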
2. synthesizeToFile
Its callback is FileSynthesisCallback.
It is broadly similar to PlaybackSynthesisCallback, except that instead of playing the audio it writes it to the given file. The instances to watch here are UtteranceProgressDispatcher mDispatcher and FileChannel mFileChannel; the latter does the file writing.
Here are the three methods (abridged):
public int start(int sampleRateInHz, int audioFormat, int channelCount) {
    mDispatcher.dispatchOnStart();
    // Reserve space for the WAV header; it gets filled in later by done().
    mFileChannel.write(ByteBuffer.allocate(WAV_HEADER_LENGTH));
    return android.speech.tts.TextToSpeech.SUCCESS;
}
public int audioAvailable(byte[] buffer, int offset, int length) {
    mFileChannel.write(ByteBuffer.wrap(buffer, offset, length));
    return android.speech.tts.TextToSpeech.SUCCESS;
}
public int done() {
    // Seek back and write the real WAV header now that the data length is known.
    mFileChannel.position(0);
    final int dataLength = (int) (mFileChannel.size() - WAV_HEADER_LENGTH);
    mFileChannel.write(makeWavHeader(mSampleRateInHz, mAudioFormat, mChannelCount, dataLength));
    mDispatcher.dispatchOnSuccess();
    return android.speech.tts.TextToSpeech.SUCCESS;
}
Note makeWavHeader in done: it writes the WAV file header.
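For reference, a standard 44-byte PCM WAV header can be built as below. This follows the RIFF/WAVE layout rather than copying the framework's makeWavHeader, and assumes 16-bit little-endian samples:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

// Sketch of the 44-byte PCM WAV header that FileSynthesisCallback patches
// in during done(), once the total audio data length is known.
public class WavHeader {
    static final int HEADER_LENGTH = 44; // corresponds to WAV_HEADER_LENGTH

    static ByteBuffer makeWavHeader(int sampleRateHz, int channelCount, int dataLength) {
        short bitsPerSample = 16;
        short blockAlign = (short) (channelCount * bitsPerSample / 8);
        ByteBuffer header = ByteBuffer.allocate(HEADER_LENGTH).order(ByteOrder.LITTLE_ENDIAN);
        header.put("RIFF".getBytes(StandardCharsets.US_ASCII));
        header.putInt(36 + dataLength);                 // bytes remaining after this field
        header.put("WAVE".getBytes(StandardCharsets.US_ASCII));
        header.put("fmt ".getBytes(StandardCharsets.US_ASCII));
        header.putInt(16);                              // fmt chunk size
        header.putShort((short) 1);                     // audio format 1 = linear PCM
        header.putShort((short) channelCount);
        header.putInt(sampleRateHz);
        header.putInt(sampleRateHz * blockAlign);       // byte rate
        header.putShort(blockAlign);
        header.putShort(bitsPerSample);
        header.put("data".getBytes(StandardCharsets.US_ASCII));
        header.putInt(dataLength);
        header.flip();
        return header;
    }

    public static void main(String[] args) {
        ByteBuffer h = makeWavHeader(16000, 1, 32000);  // 1 s of 16 kHz mono 16-bit audio
        System.out.println(h.remaining()); // 44
    }
}
```

Writing a zero-filled placeholder in start() and patching it in done() is what lets the service stream audio to disk without knowing the total length up front.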
OK, that wraps up this walkthrough of the Android speech package source, and with it the Android TTS series is essentially complete. Some details were left untouched, such as Locale handling and setting the default engine, but the main thread of the story is here.
Two things come out of this process: first, a fairly complete picture of how Android TTS hangs together; second, an appreciation of the speech package's design: whatever the concrete TTS engine is, it must follow the same contract (extend the Service, implement the interfaces), and in exchange every engine can be driven through the same TextToSpeech interface.