This article is my original work; please credit the author and source when reposting.
In the previous installment we looked at how OkHttp's dispatcher handles synchronous and asynchronous requests. This time we walk through OkHttp's core module: the interceptor chain. Here you will see how OkHttp actually processes a request.
Series index:
OkHttp source study, part 1: overall flow and the Dispatcher
OkHttp source study, part 2: the interceptor chain
OkHttp source study, part 3: caching
0. Overall flow of the interceptor chain
The previous installment ended at RealCall's getResponseWithInterceptorChain method, so let's pick up there and look at it:
Response getResponseWithInterceptorChain() throws IOException {
// Build a full stack of interceptors.
List<Interceptor> interceptors = new ArrayList<>();
interceptors.addAll(client.interceptors());
interceptors.add(retryAndFollowUpInterceptor);
interceptors.add(new BridgeInterceptor(client.cookieJar()));
interceptors.add(new CacheInterceptor(client.internalCache()));
interceptors.add(new ConnectInterceptor(client));
if (!forWebSocket) {
interceptors.addAll(client.networkInterceptors());
}
interceptors.add(new CallServerInterceptor(forWebSocket));
Interceptor.Chain chain = new RealInterceptorChain(
interceptors, null, null, null, 0, originalRequest);
return chain.proceed(originalRequest);
}
The first part of this method is straightforward: it creates a list to hold interceptors. Interceptor is an interface, and every element added afterwards is an implementation of it. First it adds the interceptors held by the OkHttpClient, i.e. all of the user-defined application interceptors. Then it adds, in order, retryAndFollowUpInterceptor, BridgeInterceptor, CacheInterceptor, ConnectInterceptor and CallServerInterceptor; in the second-to-last slot, provided this is not a WebSocket call, the networkInterceptors group is added as well. Pay attention to this ordering: as we will see, the interceptors run in exactly the order in which they were added.
Once the list is built, proceed is called on a RealInterceptorChain with the original request. As the name suggests, the interceptor chain strings the interceptors together: each one in turn intercepts, or rather works on, the request, and once a response is obtained it travels back up the chain so that each interceptor can process the response in turn before it is finally returned to the user. Now let's see how proceed links the interceptors together:
// Call the next interceptor in the chain.
RealInterceptorChain next = new RealInterceptorChain(
interceptors, streamAllocation, httpCodec, connection, index + 1, request);
Interceptor interceptor = interceptors.get(index);
Response response = interceptor.intercept(next);
The actual source is longer; here we skip the various sanity checks and look only at the core lines. When reading source code, always keep hold of the main thread of execution: you cannot understand or remember every detail in one pass, and without the big picture you end up seeing the trees but missing the forest.
Here it creates another RealInterceptorChain object, identical except that index has been advanced by one. The index is simply the cursor into the interceptors list. It then takes the current interceptor out of the list, calls its intercept method with the newly created RealInterceptorChain, and returns the resulting response.
This may look confusing at first: intuition says a single RealInterceptorChain should manage all of the interceptors, yet every interception step seems to produce a brand-new RealInterceptorChain, only with the index moved one position forward. The key is the intercept method. It is declared on the Interceptor interface, and every implementation, including any interceptor you write yourself, must call chain.proceed(request) for the chain to keep working; you will see that call in every snippet below. Does proceed sound familiar? It should: it is the very method we called on the first chain object in getResponseWithInterceptorChain. Here it is called again, but on the newly created chain whose index is one greater, so that call in turn creates yet another chain object, takes the next interceptor out by index, and runs its intercept method. The recursion continues until there is no next interceptor in the list, at which point a response is produced and handed back up.
The chain is used quite cleverly here and behaves rather like a stack: each interceptor's call is pushed as the request travels down, and once the response is obtained the calls pop one by one on the way back up. Writing it naively, I might just loop over the interceptors with a for loop, but then I would need a second, reversed loop so that each one could post-process the response. The stack-like structure clearly fits this scenario better.
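To make the recursion concrete, here is a minimal sketch of an application interceptor of my own (not OkHttp source; the X-Elapsed-Millis header is purely hypothetical). It only works because it calls chain.proceed(request), which hands control to the next RealInterceptorChain; everything after that call runs on the way back up, exactly like the stack popping described above.

import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

class TimingInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();            // "way down": inspect or rewrite the request
    long start = System.nanoTime();

    // Without this call the request never reaches the later interceptors
    // (and ultimately CallServerInterceptor), so no response would exist.
    Response response = chain.proceed(request);

    long tookMs = (System.nanoTime() - start) / 1_000_000;   // "way up": post-process the response
    return response.newBuilder()
        .header("X-Elapsed-Millis", Long.toString(tookMs))   // hypothetical header, for illustration only
        .build();
  }
}

Registering it with addInterceptor() puts it in client.interceptors(), i.e. ahead of retryAndFollowUpInterceptor; addNetworkInterceptor() would instead place it in the networkInterceptors() slot just before CallServerInterceptor.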
1. RetryAndFollowUpInterceptor
Now for the first interceptor, RetryAndFollowUpInterceptor. As the name suggests, it is responsible for retrying failed connections and following redirects.
Go straight to its intercept method:
@Override public Response intercept(Chain chain) throws IOException {
Request request = chain.request();
streamAllocation = new StreamAllocation(
client.connectionPool(), createAddress(request.url()), callStackTrace);
int followUpCount = 0;
Response priorResponse = null;
while (true) {
if (canceled) {
streamAllocation.release();
throw new IOException("Canceled");
}
Response response = null;
boolean releaseConnection = true;
try {
response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null);
releaseConnection = false;
} catch (RouteException e) {
// The attempt to connect via a route failed. The request will not have been sent.
if (!recover(e.getLastConnectException(), false, request)) {
throw e.getLastConnectException();
}
releaseConnection = false;
continue;
} catch (IOException e) {
// An attempt to communicate with a server failed. The request may have been sent.
boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
if (!recover(e, requestSendStarted, request)) throw e;
releaseConnection = false;
continue;
} finally {
// We're throwing an unchecked exception. Release any resources.
if (releaseConnection) {
streamAllocation.streamFailed(null);
streamAllocation.release();
}
}
// Attach the prior response if it exists. Such responses never have a body.
if (priorResponse != null) {
response = response.newBuilder()
.priorResponse(priorResponse.newBuilder()
.body(null)
.build())
.build();
}
Request followUp = followUpRequest(response);
if (followUp == null) {
if (!forWebSocket) {
streamAllocation.release();
}
return response;
}
closeQuietly(response.body());
if (++followUpCount > MAX_FOLLOW_UPS) {
streamAllocation.release();
throw new ProtocolException("Too many follow-up requests: " + followUpCount);
}
if (followUp.body() instanceof UnrepeatableRequestBody) {
streamAllocation.release();
throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
}
if (!sameConnection(response, followUp.url())) {
streamAllocation.release();
streamAllocation = new StreamAllocation(
client.connectionPool(), createAddress(followUp.url()), callStackTrace);
} else if (streamAllocation.codec() != null) {
throw new IllegalStateException("Closing the body of " + response
+ " didn't close its backing stream. Bad interceptor?");
}
request = followUp;
priorResponse = response;
}
}
The code is quite long and does not trim down well, so I have pasted it all. Don't be put off: it is still fairly easy to follow. First it creates a StreamAllocation object, which bundles together RealConnection, RouteSelector, HttpCodec and the other objects an HTTP connection relies on. If you're interested, dig into that class; here we only need to know it is the resource object every HTTP connection depends on.
Next it defines followUpCount, the number of follow-ups so far, and a null prior response, and then enters a while (true) loop. You can guess without reading further that this loop is what keeps retrying and following redirects. The core line is response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null): a "retry" is simply another trip down the rest of the chain. The difference from the first call in RealCall is that streamAllocation is now passed to the next link in the chain, where previously null was passed.
So how does this endless loop ever finish? Apart from the return once a response has been obtained and no follow-up is needed, there seems to be no other exit. Does it really keep re-requesting until something comes back? Obviously not: the loop is also terminated by throwing exceptions. The places where it throws are:
- The call has been canceled
- The follow-up count exceeds the limit (MAX_FOLLOW_UPS, which is 20)
- The follow-up request's body is an UnrepeatableRequestBody and cannot be sent again
- The codec inside streamAllocation was not released correctly
In every case except the last, streamAllocation.release() is called first to free the resources. The other key point is the call to followUpRequest, which decides whether to follow up. That method is quite long; in short, it reads the response code and builds a different follow-up request depending on it. If no follow-up is needed it returns null; for redirect codes such as 301 or 302 it rebuilds the request against the redirect URL, and the loop above then sends that new request.
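To give a feel for the redirect branch, here is a simplified sketch of what followUpRequest does for 3xx codes (my own condensation, not the real method, which also handles 401/407 authentication, 408 retries, method changes, and the client's followRedirects()/followSslRedirects() settings):

import okhttp3.HttpUrl;
import okhttp3.Request;
import okhttp3.Response;

final class RedirectSketch {
  static Request followUpForRedirect(Response response) {
    switch (response.code()) {
      case 301: case 302: case 303: case 307: case 308:
        String location = response.header("Location");
        if (location == null) return null;                 // nothing to follow
        HttpUrl url = response.request().url().resolve(location);
        if (url == null) return null;                      // unparseable Location, give up
        // Rebuild the request against the new URL; the while (true) loop above
        // then pushes it through the rest of the chain again.
        return response.request().newBuilder().url(url).build();
      default:
        return null;                                       // no follow-up: hand the response back as is
    }
  }
}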
That covers most of RetryAndFollowUpInterceptor: its job is to retry on failure and to rebuild and re-send requests for redirects. Not so bad, right?
2. BridgeInterceptor
BridgeInterceptor is the simplest of the five. As its name says, it is the bridge between the request and response objects our code works with and the data that actually travels over the network.
@Override public Response intercept(Chain chain) throws IOException {
Request userRequest = chain.request();
Request.Builder requestBuilder = userRequest.newBuilder();
RequestBody body = userRequest.body();
if (body != null) {
MediaType contentType = body.contentType();
if (contentType != null) {
requestBuilder.header("Content-Type", contentType.toString());
}
long contentLength = body.contentLength();
if (contentLength != -1) {
requestBuilder.header("Content-Length", Long.toString(contentLength));
requestBuilder.removeHeader("Transfer-Encoding");
} else {
requestBuilder.header("Transfer-Encoding", "chunked");
requestBuilder.removeHeader("Content-Length");
}
}
if (userRequest.header("Host") == null) {
requestBuilder.header("Host", hostHeader(userRequest.url(), false));
}
if (userRequest.header("Connection") == null) {
requestBuilder.header("Connection", "Keep-Alive");
}
// If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
// the transfer stream.
boolean transparentGzip = false;
if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
transparentGzip = true;
requestBuilder.header("Accept-Encoding", "gzip");
}
List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
if (!cookies.isEmpty()) {
requestBuilder.header("Cookie", cookieHeader(cookies));
}
if (userRequest.header("User-Agent") == null) {
requestBuilder.header("User-Agent", Version.userAgent());
}
Response networkResponse = chain.proceed(requestBuilder.build());
HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());
Response.Builder responseBuilder = networkResponse.newBuilder()
.request(userRequest);
if (transparentGzip
&& "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
&& HttpHeaders.hasBody(networkResponse)) {
GzipSource responseBody = new GzipSource(networkResponse.body().source());
Headers strippedHeaders = networkResponse.headers().newBuilder()
.removeAll("Content-Encoding")
.removeAll("Content-Length")
.build();
responseBuilder.headers(strippedHeaders);
responseBuilder.body(new RealResponseBody(strippedHeaders, Okio.buffer(responseBody)));
}
return responseBuilder.build();
}
You can see that it adds several headers to the request we passed in, which is why a captured packet shows plenty of headers even when we set none ourselves. Note "Keep-Alive" in particular: this header tells the server not to drop the connection once the response is done. HTTP is often described as a connection-per-request protocol, so why does OkHttp add this by default? To reuse connections: when another request is made to the same host, the cost of opening a fresh connection is saved.
When the response comes back, if the connection used gzip compression, the response stream is transparently gunzipped here (and the Content-Encoding/Content-Length headers stripped), so we never need to do any of that ourselves.
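If you want to see these implicit headers for yourself, a network interceptor is a convenient vantage point: it sits after BridgeInterceptor in the chain, so it observes the request as it will actually go out on the wire. A minimal sketch of my own (register it with addNetworkInterceptor, not addInterceptor):

import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

class WireHeaderLogger implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    Request wireRequest = chain.request();
    // BridgeInterceptor has already run by now, so headers such as Host,
    // Connection: Keep-Alive, Accept-Encoding: gzip and User-Agent are
    // visible here even if the caller never set them.
    System.out.println(wireRequest.headers());
    return chain.proceed(wireRequest);
  }
}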
3. CacheInterceptor
CacheInterceptor, as the name implies, handles caching. Since the next installment is devoted to OkHttp's cache mechanism, I will skip some details here. Its intercept method is very long and every part deserves a look, so I will go through it piece by piece.
Response cacheCandidate = cache != null
? cache.get(chain.request())
: null;
long now = System.currentTimeMillis();
CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
Request networkRequest = strategy.networkRequest;
Response cacheResponse = strategy.cacheResponse;
First it looks up a cached response for the current request in the cache object and names it cacheCandidate: a candidate only, not necessarily used. Then it builds a CacheStrategy from the current time, the request and the candidate; this is the object that implements the caching policy. Based on the request and the candidate it produces networkRequest (the request that will actually go to the network, if any) and cacheResponse (the cached response, if usable). The class itself is discussed in the next installment; for now, just remember: networkRequest == null (for example when the request carries an only-if-cached directive) means no network request will be made, and cacheResponse == null means there is no usable cache, whether because nothing was cached, the entry has expired, or the cache policy forbids using it.
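One thing worth stressing: the cache object here is null unless you configure one, because OkHttp does not cache anything by default. A minimal sketch of switching it on (directory name and the 10 MiB size are arbitrary illustration values):

import java.io.File;
import okhttp3.Cache;
import okhttp3.OkHttpClient;

class CacheSetup {
  static OkHttpClient newClient(File baseDir) {
    // Without this, CacheInterceptor's cache field is null and both
    // cacheCandidate and cacheResponse above will always be null too.
    Cache cache = new Cache(new File(baseDir, "http-cache"), 10L * 1024 * 1024);
    return new OkHttpClient.Builder()
        .cache(cache)
        .build();
  }
}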
if (cache != null) {
cache.trackResponse(strategy);
}
if (cacheCandidate != null && cacheResponse == null) {
closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
}
Cache's trackResponse only updates counters and is not important here. The following check: if cacheCandidate is non-null we did find a cached entry, but cacheResponse being null means it has expired or the policy won't allow it, so the candidate is useless and its stream has to be closed.
// If we're forbidden from using the network and the cache is insufficient, fail.
if (networkRequest == null && cacheResponse == null) {
return new Response.Builder()
.request(chain.request())
.protocol(Protocol.HTTP_1_1)
.code(504)
.message("Unsatisfiable Request (only-if-cached)")
.body(Util.EMPTY_RESPONSE)
.sentRequestAtMillis(-1L)
.receivedResponseAtMillis(System.currentTimeMillis())
.build();
}
// If we don't need the network, we're done.
if (networkRequest == null) {
return cacheResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.build();
}
As noted above, networkRequest == null means no network request will be made, and there are two cases. If cacheResponse is also null, we have no valid cached response and we are not going to the network either, so a response with code 504 is built for the caller. If cacheResponse is not null, we have a usable cached response and this call will not touch the network, so the cached response is returned directly.
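The 504 branch is easy to trigger on purpose. As a sketch (placeholder URL), a request carrying CacheControl.FORCE_CACHE maps to only-if-cached, so networkRequest is null, and if there is no usable cache entry either you get exactly the synthetic 504 built above:

import java.io.IOException;
import okhttp3.CacheControl;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

class ForceCacheDemo {
  static void fetchFromCacheOnly(OkHttpClient client) throws IOException {
    Request request = new Request.Builder()
        .url("https://example.com/")                  // placeholder URL
        .cacheControl(CacheControl.FORCE_CACHE)       // "only-if-cached": forbid the network
        .build();
    try (Response response = client.newCall(request).execute()) {
      if (response.code() == 504) {
        // No valid cache entry and the network is forbidden: this is the
        // "Unsatisfiable Request (only-if-cached)" response from above.
        System.out.println("Nothing cached for this URL");
      } else {
        System.out.println("Served from cache: " + response.code());
      }
    }
  }
}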
Response networkResponse = null;
try {
networkResponse = chain.proceed(networkRequest);
} finally {
// If we're crashing on I/O or otherwise, don't leak the cache body.
if (networkResponse == null && cacheCandidate != null) {
closeQuietly(cacheCandidate.body());
}
}
// If we have a cache response too, then we're doing a conditional get.
if (cacheResponse != null) {
if (networkResponse.code() == HTTP_NOT_MODIFIED) {
Response response = cacheResponse.newBuilder()
.headers(combine(cacheResponse.headers(), networkResponse.headers()))
.sentRequestAtMillis(networkResponse.sentRequestAtMillis())
.receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
networkResponse.body().close();
// Update the cache after combining headers but before stripping the
// Content-Encoding header (as performed by initContentStream()).
cache.trackConditionalCacheHit();
cache.update(cacheResponse, response);
return response;
} else {
closeQuietly(cacheResponse.body());
}
}
Every case where networkRequest is null has now been handled, so from here on a network request must be made: the chain's proceed method lets the remaining interceptors fetch the response from the network. If cacheResponse is non-null we now hold two responses, one cached and one from the network, and must decide which to use. If the server answered HTTP_NOT_MODIFIED, the familiar 304, the resource has not changed and the client should use its local copy; the server sends no body in that case. So a new response is built on top of the cached cacheResponse and returned. We also record the conditional cache hit (OkHttp's cache is backed by a DiskLruCache with least-recently-used eviction, so usage is tracked), and cache.update refreshes the stored entry with the merged headers. If the code was not 304, the cached copy is stale and its stream is closed.
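For intuition, the conditional request that makes a 304 possible is built by CacheStrategy from the stale cached response; a rough manual equivalent looks like this (a sketch only; the real logic lives in CacheStrategy.Factory and prefers an ETag validator over Last-Modified):

import okhttp3.Request;

final class ConditionalRequestSketch {
  // Roughly what CacheStrategy does when it holds a stale cached response:
  // re-issue the request with a validator so the server may answer 304.
  static Request makeConditional(Request original, String cachedEtag, String cachedLastModified) {
    Request.Builder builder = original.newBuilder();
    if (cachedEtag != null) {
      builder.header("If-None-Match", cachedEtag);            // preferred validator
    } else if (cachedLastModified != null) {
      builder.header("If-Modified-Since", cachedLastModified);
    }
    return builder.build();
  }
}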
Response response = networkResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
if (cache != null) {
if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
// Offer this request to the cache.
CacheRequest cacheRequest = cache.put(response);
return cacheWritingResponse(cacheRequest, response);
}
if (HttpMethod.invalidatesCache(networkRequest.method())) {
try {
cache.remove(networkRequest);
} catch (IOException ignored) {
// The cache cannot be written.
}
}
}
return response;
The remaining case is that the cache was stale and the network response must be used: the final response is built from networkResponse and returned. One more task is to update the cache by writing this final response into the cache object. But if the request method is not cacheable (of the common methods, GET responses are cached while POST is not), we not only skip writing the cache, we also remove any cached entry that the request invalidates.
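A practical consequence: whether the response gets written to the cache is decided by CacheStrategy.isCacheable(), which looks at the response's Cache-Control headers. If your server sends none, a common (if blunt) workaround is a network interceptor that rewrites them before CacheInterceptor sees the response on the way back up; a sketch of my own, with an arbitrary max-age:

import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.Response;

class ForceCacheHeaders implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    Response response = chain.proceed(chain.request());
    // A network interceptor hands its response back to CacheInterceptor, so
    // rewriting Cache-Control here changes what isCacheable() decides.
    // max-age=60 is just an example value.
    return response.newBuilder()
        .header("Cache-Control", "public, max-age=60")
        .removeHeader("Pragma")
        .build();
  }
}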
That is most of the cache interceptor; the next installment digs further into how OkHttp's cache strategy is implemented.
4. ConnectInterceptor
After three interceptors we still have not made an actual request. Don't worry: ConnectInterceptor is the one that establishes the HTTP connection. Its intercept method is short, so let's look at it right away:
@Override public Response intercept(Chain chain) throws IOException {
RealInterceptorChain realChain = (RealInterceptorChain) chain;
Request request = realChain.request();
StreamAllocation streamAllocation = realChain.streamAllocation();
// We need the network to satisfy this request. Possibly for validating a conditional GET.
boolean doExtensiveHealthChecks = !request.method().equals("GET");
HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
RealConnection connection = streamAllocation.connection();
return realChain.proceed(request, streamAllocation, httpCodec, connection);
}
Very brief, isn't it? It does just two things: it produces the HttpCodec, the object that encodes requests and decodes responses for this call, and it calls streamAllocation.connection() to obtain the RealConnection. But after getting the connection object I searched for quite a while for the place where the connection is actually established, and it turns out to happen inside streamAllocation.newStream(). Let's look at that method:
public HttpCodec newStream(OkHttpClient client, boolean doExtensiveHealthChecks) {
int connectTimeout = client.connectTimeoutMillis();
int readTimeout = client.readTimeoutMillis();
int writeTimeout = client.writeTimeoutMillis();
boolean connectionRetryEnabled = client.retryOnConnectionFailure();
try {
RealConnection resultConnection = findHealthyConnection(connectTimeout, readTimeout,
writeTimeout, connectionRetryEnabled, doExtensiveHealthChecks);
HttpCodec resultCodec = resultConnection.newCodec(client, this);
synchronized (connectionPool) {
codec = resultCodec;
return resultCodec;
}
} catch (IOException e) {
throw new RouteException(e);
}
}
Here it reads a few connection parameters from the OkHttpClient, calls findHealthyConnection to obtain a connection, and uses that RealConnection to create the HttpCodec for this request. findHealthyConnection itself is an endless loop that keeps calling findConnection until a healthy connection is found, so let's go straight to findConnection:
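All of the values read at the top of newStream() come straight from the OkHttpClient, so they are under your control when you build the client. A sketch with illustrative numbers (5 idle connections / 5 minutes are, as far as I know, also the pool's defaults):

import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

class TimeoutConfig {
  static OkHttpClient newClient() {
    return new OkHttpClient.Builder()
        .connectTimeout(10, TimeUnit.SECONDS)    // -> client.connectTimeoutMillis()
        .readTimeout(20, TimeUnit.SECONDS)       // -> client.readTimeoutMillis()
        .writeTimeout(20, TimeUnit.SECONDS)      // -> client.writeTimeoutMillis()
        .retryOnConnectionFailure(true)          // -> connectionRetryEnabled above
        // The pool that findConnection() searches before opening a new socket.
        .connectionPool(new ConnectionPool(5, 5, TimeUnit.MINUTES))
        .build();
  }
}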
private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
boolean connectionRetryEnabled) throws IOException {
Route selectedRoute;
synchronized (connectionPool) {
if (released) throw new IllegalStateException("released");
if (codec != null) throw new IllegalStateException("codec != null");
if (canceled) throw new IOException("Canceled");
// Attempt to use an already-allocated connection.
RealConnection allocatedConnection = this.connection;
if (allocatedConnection != null && !allocatedConnection.noNewStreams) {
return allocatedConnection;
}
// Attempt to get a connection from the pool.
Internal.instance.get(connectionPool, address, this, null);
if (connection != null) {
return connection;
}
selectedRoute = route;
}
// If we need a route, make one. This is a blocking operation.
if (selectedRoute == null) {
selectedRoute = routeSelector.next();
}
RealConnection result;
synchronized (connectionPool) {
if (canceled) throw new IOException("Canceled");
// Now that we have an IP address, make another attempt at getting a connection from the pool.
// This could match due to connection coalescing.
Internal.instance.get(connectionPool, address, this, selectedRoute);
if (connection != null) {
route = selectedRoute;
return connection;
}
// Create a connection and assign it to this allocation immediately. This makes it possible
// for an asynchronous cancel() to interrupt the handshake we're about to do.
route = selectedRoute;
refusedStreamCount = 0;
result = new RealConnection(connectionPool, selectedRoute);
acquire(result);
}
// Do TCP + TLS handshakes. This is a blocking operation.
result.connect(connectTimeout, readTimeout, writeTimeout, connectionRetryEnabled);
routeDatabase().connected(result.route());
Socket socket = null;
synchronized (connectionPool) {
// Pool the connection.
Internal.instance.put(connectionPool, result);
// If another multiplexed connection to the same address was created concurrently, then
// release this connection and acquire that one.
if (result.isMultiplexed()) {
socket = Internal.instance.deduplicate(connectionPool, address, this);
result = connection;
}
}
closeQuietly(socket);
return result;
}
Another fairly long method. The first half is about finding a connection: first it checks whether the already-allocated connection can be reused, then it tries the connection pool, and only if the pool has nothing does it create a new connection and add it to the pool. Once we have a connection object, its connect method is finally called to really connect. As the comments say, connect performs the TCP and TLS handshakes and is a blocking operation. There is also a check for a multiplexed duplicate: if another connection to the same address was created concurrently, this one is released in favour of that one. Having come this far, we should of course look at how connect establishes the connection:
public void connect(
int connectTimeout, int readTimeout, int writeTimeout, boolean connectionRetryEnabled) {
if (protocol != null) throw new IllegalStateException("already connected");
RouteException routeException = null;
List<ConnectionSpec> connectionSpecs = route.address().connectionSpecs();
ConnectionSpecSelector connectionSpecSelector = new ConnectionSpecSelector(connectionSpecs);
if (route.address().sslSocketFactory() == null) {
if (!connectionSpecs.contains(ConnectionSpec.CLEARTEXT)) {
throw new RouteException(new UnknownServiceException(
"CLEARTEXT communication not enabled for client"));
}
String host = route.address().url().host();
if (!Platform.get().isCleartextTrafficPermitted(host)) {
throw new RouteException(new UnknownServiceException(
"CLEARTEXT communication to " + host + " not permitted by network security policy"));
}
}
while (true) {
try {
if (route.requiresTunnel()) {
connectTunnel(connectTimeout, readTimeout, writeTimeout);
} else {
connectSocket(connectTimeout, readTimeout);
}
establishProtocol(connectionSpecSelector);
break;
} catch (IOException e) {
closeQuietly(socket);
closeQuietly(rawSocket);
socket = null;
rawSocket = null;
source = null;
sink = null;
handshake = null;
protocol = null;
http2Connection = null;
if (routeException == null) {
routeException = new RouteException(e);
} else {
routeException.addConnectException(e);
}
if (!connectionRetryEnabled || !connectionSpecSelector.connectionFailed(e)) {
throw routeException;
}
}
}
if (http2Connection != null) {
synchronized (connectionPool) {
allocationLimit = http2Connection.maxConcurrentStreams();
}
}
}
This code is fairly close to the metal and, frankly, heavy going for me as well. First it checks whether the Protocol object is null; Protocol represents the negotiated protocol, so a non-null value means the connection already exists. It then takes the connectionSpecs list from the route's address; a ConnectionSpec describes the socket configuration of a connection, including which TLS versions and cipher suites are acceptable. Next comes the cleartext check: if the address has no sslSocketFactory (a plain http:// call), the connection is only allowed when the specs include CLEARTEXT and the platform's network security policy permits cleartext traffic to that host; otherwise a RouteException is thrown and no connection is made. This is one of the reasons OkHttp can claim to be a safe HTTP client.
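The connectionSpecs checked here also come from the client, so you can tighten or loosen them yourself; a sketch (keep CLEARTEXT only if you really need plain http://, otherwise the check above throws the RouteException for cleartext URLs):

import java.util.Arrays;
import okhttp3.ConnectionSpec;
import okhttp3.OkHttpClient;

class SpecConfig {
  static OkHttpClient newClient() {
    return new OkHttpClient.Builder()
        // MODERN_TLS covers https:// URLs; dropping CLEARTEXT makes every
        // plain-text connection fail in connect() as shown above.
        .connectionSpecs(Arrays.asList(ConnectionSpec.MODERN_TLS, ConnectionSpec.CLEARTEXT))
        .build();
  }
}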
Then comes a loop that attempts the connection, calling establishProtocol to bring up the real client-server protocol; this can throw an IOException. If all goes well it breaks out of the loop; on an exception it either gives up and rethrows or tries the next attempt, depending on whether retries are enabled and the failure is recoverable. Finally, the number of concurrent streams on the connection is capped for HTTP/2.
Going one level deeper, for HTTPS routes establishProtocol ends up calling connectTls, which performs the TLS handshake; the SSL certificate verification for https lives in there, and space does not allow a detailed walk-through. For HTTP/2, it finally calls Http2Connection's start method to bring the connection up. There is a huge amount of detail along this path, but since our focus is the interceptors I will stop here and perhaps cover it in a dedicated article later.
5. CallServerInterceptor
This is the last interceptor in an OkHttp request. ConnectInterceptor established the HTTP connection; CallServerInterceptor is the one that actually sends the request to the server. Straight to its intercept code:
RealInterceptorChain realChain = (RealInterceptorChain) chain;
HttpCodec httpCodec = realChain.httpStream();
StreamAllocation streamAllocation = realChain.streamAllocation();
RealConnection connection = (RealConnection) realChain.connection();
Request request = realChain.request();
long sentRequestMillis = System.currentTimeMillis();
httpCodec.writeRequestHeaders(request);
First it grabs the HttpCodec and the RealConnection, then uses the HttpCodec to write the final request headers.
Response.Builder responseBuilder = null;
if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
// If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
// Continue" response before transmitting the request body. If we don't get that, return what
// we did get (such as a 4xx response) without ever transmitting the request body.
if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
httpCodec.flushRequest();
responseBuilder = httpCodec.readResponseHeaders(true);
}
if (responseBuilder == null) {
// Write the request body if the "Expect: 100-continue" expectation was met.
Sink requestBodyOut = httpCodec.createRequestBody(request, request.body().contentLength());
BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
request.body().writeTo(bufferedRequestBody);
bufferedRequestBody.close();
} else if (!connection.isMultiplexed()) {
// If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection from
// being reused. Otherwise we're still obligated to transmit the request body to leave the
// connection in a consistent state.
streamAllocation.noNewStreams();
}
}
This part writes the request body. It first checks whether the method permits a body at all (a GET, for example, has none), then writes the final body through the HttpCodec; if the request carries Expect: 100-continue, the headers are flushed first and the body is only transmitted once the server agrees.
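For reference, this is the kind of request that takes the body-writing branch above; the Expect header is optional and mainly worthwhile for large uploads (URL and payload are placeholders):

import java.io.IOException;
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

class PostDemo {
  static final MediaType JSON = MediaType.parse("application/json; charset=utf-8");

  static Response post(OkHttpClient client, String json) throws IOException {
    RequestBody body = RequestBody.create(JSON, json);    // contentLength() is known, so Content-Length gets set
    Request request = new Request.Builder()
        .url("https://example.com/upload")                // placeholder URL
        .header("Expect", "100-continue")                 // optional: wait for the server before sending the body
        .post(body)
        .build();
    // permitsRequestBody("POST") is true, so CallServerInterceptor writes the
    // body through httpCodec.createRequestBody() as in the snippet above.
    return client.newCall(request).execute();
  }
}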
httpCodec.finishRequest();
if (responseBuilder == null) {
responseBuilder = httpCodec.readResponseHeaders(false);
}
Response response = responseBuilder
.request(request)
.handshake(streamAllocation.connection().handshake())
.sentRequestAtMillis(sentRequestMillis)
.receivedResponseAtMillis(System.currentTimeMillis())
.build();
int code = response.code();
if (forWebSocket && code == 101) {
// Connection is upgrading, but we need to ensure interceptors see a non-null response body.
response = response.newBuilder()
.body(Util.EMPTY_RESPONSE)
.build();
} else {
response = response.newBuilder()
.body(httpCodec.openResponseBody(response))
.build();
}
if ("close".equalsIgnoreCase(response.request().header("Connection"))
|| "close".equalsIgnoreCase(response.header("Connection"))) {
streamAllocation.noNewStreams();
}
if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
throw new ProtocolException(
"HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
}
return response;
Once the request has been fully written, the HttpCodec reads and decodes the response headers and body and the final response is built. If the code is 101 (the protocol is being upgraded), an empty body is attached so interceptors still see a non-null body. A 204 or 205 (no content expected) with a non-zero Content-Length is a protocol error and throws. And if the Connection header on either the request or the response is close, this connection must not serve further requests, so new streams are forbidden on it.
Finally the response is passed back up, processed by each interceptor in turn, and returned to the user.
That's it. My knowledge is limited, so please point out any mistakes you spot. Thanks for reading!