【Spring Cloud Technical Series】「Eureka Source Code Analysis」Understanding Eureka's Workflow and Mechanics from the Source Code (Part 2)

Following on from the first half of this Eureka series, we now continue the walkthrough. For the previous installment, see [【Spring Cloud Technical Series】「Eureka Source Code Analysis」Understanding Eureka's Workflow and Mechanics from the Source Code (Part 1)].

Principle Review

  1. Eureka Server provides the service registry. When each node starts up, it registers itself with the Eureka Server, so the registry on the Eureka Server holds the information of every available service node; that information can also be viewed directly in the dashboard.
  2. Eureka Client is a Java client that simplifies interaction with the Eureka Server; it also ships with a built-in load balancer that uses a round-robin algorithm (a minimal client bootstrap sketch follows this list).
  3. After the application starts, it sends heartbeats to the Eureka Server (every 30 seconds by default). If the Eureka Server receives no heartbeat from a node for several heartbeat periods (3 periods, i.e. 90 seconds, by default), it removes that node from the service registry.
  4. In a high-availability setup, Eureka Servers synchronize their data with one another through replication.
  5. Eureka Client caches the registry locally, so even if every Eureka Server goes down, clients can still use the cached information to consume the APIs of other services.
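
To make point 2 concrete, here is a minimal, hypothetical client bootstrap (the class name is made up; it assumes a Spring Boot project with the Spring Cloud Netflix Eureka client starter on the classpath). Starting such an application triggers the registration and the 30-second heartbeat cycle analyzed in the rest of this article.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

// A hypothetical provider application; once started it registers itself with the Eureka Server.
@SpringBootApplication
@EnableEurekaClient
public class SampleProviderApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleProviderApplication.class, args);
    }
}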

EurekaServer Startup Flow Analysis

How EurekaServer Handles Service Registration and Cluster Data Replication

How does a EurekaClient register with the EurekaServer?

Earlier we set a breakpoint on every method of org.springframework.cloud.netflix.eureka.server.InstanceRegistry, and the EurekaServer is now running in Debug mode. So let's pick any microservice annotated with @EnableEurekaClient and simply Run it.

  • Once the service starts, the register method is bound to be called, so let's keep going and see;

The instance registration method

The breakpoint in InstanceRegistry.register(final InstanceInfo info, final boolean isReplication) is hit.

  • Following the call stack of InstanceRegistry.register upward, we can see that ApplicationResource.addInstance was the caller, so let's analyze addInstance;

The ApplicationResource class

It is mainly responsible for receiving and handling HTTP service requests.

@POST
@Consumes({"application/json", "application/xml"})
public Response addInstance(InstanceInfo info,
                            @HeaderParam(PeerEurekaNode.HEADER_REPLICATION) String isReplication) {
    logger.debug("Registering instance {} (replication={})", info.getId(), isReplication);
    // validate that the instanceinfo contains all the necessary required fields
    if (isBlank(info.getId())) {
        return Response.status(400).entity("Missing instanceId").build();
    } else if (isBlank(info.getHostName())) {
        return Response.status(400).entity("Missing hostname").build();
    } else if (isBlank(info.getAppName())) {
        return Response.status(400).entity("Missing appName").build();
    } else if (!appName.equals(info.getAppName())) {
        return Response.status(400).entity("Mismatched appName, expecting " + appName + " but was " + info.getAppName()).build();
    } else if (info.getDataCenterInfo() == null) {
        return Response.status(400).entity("Missing dataCenterInfo").build();
    } else if (info.getDataCenterInfo().getName() == null) {
        return Response.status(400).entity("Missing dataCenterInfo Name").build();
    }

    // handle cases where clients may be registering with bad DataCenterInfo with missing data
    DataCenterInfo dataCenterInfo = info.getDataCenterInfo();
    if (dataCenterInfo instanceof UniqueIdentifier) {
        String dataCenterInfoId = ((UniqueIdentifier) dataCenterInfo).getId();
        if (isBlank(dataCenterInfoId)) {
            boolean experimental = "true".equalsIgnoreCase(serverConfig.getExperimental("registration.validation.dataCenterInfoId"));
            if (experimental) {
                String entity = "DataCenterInfo of type " + dataCenterInfo.getClass() + " must contain a valid id";
                return Response.status(400).entity(entity).build();
            } else if (dataCenterInfo instanceof AmazonInfo) {
                AmazonInfo amazonInfo = (AmazonInfo) dataCenterInfo;
                String effectiveId = amazonInfo.get(AmazonInfo.MetaDataKey.instanceId);
                if (effectiveId == null) {
                    amazonInfo.getMetadata().put(AmazonInfo.MetaDataKey.instanceId.getName(), info.getId());
                }
            } else {
                logger.warn("Registering DataCenterInfo of type {} without an appropriate id", dataCenterInfo.getClass());
            }
        }
    }
    registry.register(info, "true".equals(isReplication));
    return Response.status(204).build();  // 204 to be backwards compatible
}
  • The style here looks a little different from the RESTful controllers we usually write with Spring MVC. On closer inspection it turns out to be the Jersey RESTful framework, a production-grade RESTful service and client framework. Like Struts, it can also be integrated with Hibernate and Spring.

  • Notice registry.register(info, "true".equals(isReplication)): after the EurekaClient starts, it sends an HTTP(S) request that lands directly in ApplicationResource.addInstance, and everything related to registration goes through this method.

  • Next, let's dive into registry.register(info, "true".equals(isReplication));

@Override
public void register(final InstanceInfo info, final boolean isReplication) {
    handleRegistration(info, resolveInstanceLeaseDuration(info), isReplication);
    super.register(info, isReplication);
}
  • The handleRegistration(info, resolveInstanceLeaseDuration(info), isReplication) method;
private void handleRegistration(InstanceInfo info, int leaseDuration,
        boolean isReplication) {
    log("register " + info.getAppName() + ", vip " + info.getVIPAddress()
            + ", leaseDuration " + leaseDuration + ", isReplication "
            + isReplication);
    publishEvent(new EurekaInstanceRegisteredEvent(this, info, leaseDuration,
            isReplication));
}
  • It then publishes an EurekaInstanceRegisteredEvent, a service-registration event, through the ApplicationContext. You can add a listener for EurekaInstanceRegisteredEvent to run your own business logic at exactly this moment (a minimal listener sketch follows this list).
  • Next, let's look at super.register(info, isReplication), a method of InstanceRegistry's parent class PeerAwareInstanceRegistryImpl.
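
To illustrate the hook mentioned above, here is a minimal, hypothetical listener on the Eureka Server side (the class name and log message are made up). It assumes the Spring Cloud Netflix event class org.springframework.cloud.netflix.eureka.server.event.EurekaInstanceRegisteredEvent published by handleRegistration, and a regular Spring bean context.

import org.springframework.cloud.netflix.eureka.server.event.EurekaInstanceRegisteredEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// A hypothetical listener invoked each time handleRegistration publishes the registration event.
@Component
public class RegistrationAuditListener implements ApplicationListener<EurekaInstanceRegisteredEvent> {

    @Override
    public void onApplicationEvent(EurekaInstanceRegisteredEvent event) {
        // The event carries the registering instance and whether this call was a peer replication.
        System.out.println("Registered " + event.getInstanceInfo().getAppName()
                + " (replication=" + event.isReplication() + ")");
    }
}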

The service registration mechanism

Step into the register(final InstanceInfo info, final boolean isReplication) method of the PeerAwareInstanceRegistryImpl class.

@Override
public void register(final InstanceInfo info, final boolean isReplication) {
    // NOTE: lease duration; the default is the constant value of 90 seconds
    int leaseDuration = Lease.DEFAULT_DURATION_IN_SECS;
    // NOTE: the lease duration can also be read from the configuration file, so it is customizable
    if (info.getLeaseInfo() != null && info.getLeaseInfo().getDurationInSecs() > 0) {
        leaseDuration = info.getLeaseInfo().getDurationInSecs();
    }
    // NOTE: write the registrant's information into the EurekaServer registry; the parent class is AbstractInstanceRegistry
    super.register(info, leaseDuration, isReplication);
    // NOTE: synchronize the data between EurekaServer nodes by replicating to the other peers
    replicateToPeers(Action.Register, info.getAppName(), info.getId(), info, null, isReplication);
}

Step into super.register(info, leaseDuration, isReplication) to see how the EurekaServer registry is written, i.e. the AbstractInstanceRegistry.register(InstanceInfo registrant, int leaseDuration, boolean isReplication) method.

public void register(InstanceInfo registrant, int leaseDuration, boolean isReplication) {
    try {
        read.lock();
        // NOTE: the registry variable is the so-called service registry, and it is kept in memory
        Map<String, Lease<InstanceInfo>> gMap = registry.get(registrant.getAppName());
        REGISTER.increment(isReplication);
        if (gMap == null) {
            final ConcurrentHashMap<String, Lease<InstanceInfo>> gNewMap = new ConcurrentHashMap<String, Lease<InstanceInfo>>();
            gMap = registry.putIfAbsent(registrant.getAppName(), gNewMap);
            if (gMap == null) {
                gMap = gNewMap;
            }
        }
        Lease<InstanceInfo> existingLease = gMap.get(registrant.getId());
        // Retain the last dirty timestamp without overwriting it, if there is already a lease
        if (existingLease != null && (existingLease.getHolder() != null)) {
            Long existingLastDirtyTimestamp = existingLease.getHolder().getLastDirtyTimestamp();
            Long registrationLastDirtyTimestamp = registrant.getLastDirtyTimestamp();
            logger.debug("Existing lease found (existing={}, provided={}", existingLastDirtyTimestamp, registrationLastDirtyTimestamp);
            if (existingLastDirtyTimestamp > registrationLastDirtyTimestamp) {
                logger.warn("There is an existing lease and the existing lease's dirty timestamp {} is greater" +
                        " than the one that is being registered {}", existingLastDirtyTimestamp, registrationLastDirtyTimestamp);
                logger.warn("Using the existing instanceInfo instead of the new instanceInfo as the registrant");
                registrant = existingLease.getHolder();
            }
        } else {
            // The lease does not exist and hence it is a new registration
            synchronized (lock) {
                if (this.expectedNumberOfRenewsPerMin > 0) {
                    // Since the client wants to cancel it, reduce the threshold
                    // (1 for 30 seconds, 2 for a minute)
                    this.expectedNumberOfRenewsPerMin = this.expectedNumberOfRenewsPerMin + 2;
                    this.numberOfRenewsPerMinThreshold =
                            (int) (this.expectedNumberOfRenewsPerMin * serverConfig.getRenewalPercentThreshold());
                }
            }
            logger.debug("No previous lease information found; it is new registration");
        }
        Lease<InstanceInfo> lease = new Lease<InstanceInfo>(registrant, leaseDuration);
        if (existingLease != null) {
            lease.setServiceUpTimestamp(existingLease.getServiceUpTimestamp());
        }
        gMap.put(registrant.getId(), lease);
        synchronized (recentRegisteredQueue) {
            recentRegisteredQueue.add(new Pair<Long, String>(
                    System.currentTimeMillis(),
                    registrant.getAppName() + "(" + registrant.getId() + ")"));
        }
        // This is where the initial state transfer of overridden status happens
        if (!InstanceStatus.UNKNOWN.equals(registrant.getOverriddenStatus())) {
            logger.debug("Found overridden status {} for instance {}. Checking to see if needs to be add to the "
                            + "overrides", registrant.getOverriddenStatus(), registrant.getId());
            if (!overriddenInstanceStatusMap.containsKey(registrant.getId())) {
                logger.info("Not found overridden id {} and hence adding it", registrant.getId());
                overriddenInstanceStatusMap.put(registrant.getId(), registrant.getOverriddenStatus());
            }
        }
        InstanceStatus overriddenStatusFromMap = overriddenInstanceStatusMap.get(registrant.getId());
        if (overriddenStatusFromMap != null) {
            logger.info("Storing overridden status {} from map", overriddenStatusFromMap);
            registrant.setOverriddenStatus(overriddenStatusFromMap);
        }

        // Set the status based on the overridden status rules
        InstanceStatus overriddenInstanceStatus = getOverriddenInstanceStatus(registrant, existingLease, isReplication);
        registrant.setStatusWithoutDirty(overriddenInstanceStatus);

        // If the lease is registered with UP status, set lease service up timestamp
        if (InstanceStatus.UP.equals(registrant.getStatus())) {
            lease.serviceUp();
        }
        registrant.setActionType(ActionType.ADDED);
        recentlyChangedQueue.add(new RecentlyChangedItem(lease));
        registrant.setLastUpdatedTimestamp();
        invalidateCache(registrant.getAppName(), registrant.getVIPAddress(), registrant.getSecureVipAddress());
        logger.info("Registered instance {}/{} with status {} (replication={})",
                registrant.getAppName(), registrant.getId(), registrant.getStatus(), isReplication);
    } finally {
        read.unlock();
    }
}
  • This method is rather long. Reading through it, besides writing the lease into the registry it also maintains the overridden status, the recently-changed queue and the response cache; readers who are interested can study it in depth.

Replication across the cluster

Now for the replicateToPeers(Action.Register, info.getAppName(), info.getId(), info, null, isReplication) method.

private void replicateToPeers(Action action, String appName, String id,
                              InstanceInfo info /* optional */,
                              InstanceStatus newStatus /* optional */, boolean isReplication) {
    Stopwatch tracer = action.getTimer().start();
    try {
        if (isReplication) {
            numberOfReplicationsLastMin.increment();
        }
        // If it is a replication already, do not replicate again as this will create a poison replication
        // NOTE: if this call is itself a replication, do not replicate again
        if (peerEurekaNodes == Collections.EMPTY_LIST || isReplication) {
            return;
        }
        // iterate over every node in the Eureka Server cluster and replicate to each of them
        for (final PeerEurekaNode node : peerEurekaNodes.getPeerEurekaNodes()) {
            // If the url represents this host, do not replicate to yourself.
            if (peerEurekaNodes.isThisMyUrl(node.getServiceUrl())) {
                continue;
            }
            // not a replication: go through each peer node in turn; the same path handles cancel, register, heartbeat, status update, and so on
            replicateInstanceActionsToPeers(action, appName, id, info, newStatus, node);
        }
    } finally {
        tracer.stop();
    }
}
  • Whenever a registration request arrives, the EurekaServer first updates its own registry and then synchronizes the information to the other EurekaServer nodes;

  • Next, let's see how a peer node performs the replication by stepping into the replicateInstanceActionsToPeers method.

private void replicateInstanceActionsToPeers(Action action, String appName,
                                             String id, InstanceInfo info, InstanceStatus newStatus,
                                             PeerEurekaNode node) {
    try {
        InstanceInfo infoFromRegistry = null;
        CurrentRequestVersion.set(Version.V2);
        switch (action) {
            case Cancel:
                node.cancel(appName, id);
                break;
            case Heartbeat:
                InstanceStatus overriddenStatus = overriddenInstanceStatusMap.get(id);
                infoFromRegistry = getInstanceByAppAndId(appName, id, false);
                node.heartbeat(appName, id, infoFromRegistry, overriddenStatus, false);
                break;
            case Register:
                node.register(info);
                break;
            case StatusUpdate:
                infoFromRegistry = getInstanceByAppAndId(appName, id, false);
                node.statusUpdate(appName, id, newStatus, infoFromRegistry);
                break;
            case DeleteStatusOverride:
                infoFromRegistry = getInstanceByAppAndId(appName, id, false);
                node.deleteStatusOverride(appName, id, infoFromRegistry);
                break;
        }
    } catch (Throwable t) {
        logger.error("Cannot replicate information to {} for action {}", node.getServiceUrl(), action.name(), t);
    }
}
  • All of the replication actions between nodes are laid out here. Let's take the Register case, node.register(info), and see how the node actually synchronizes the data; step into node.register(info);

The replication mechanism between peers

PeerEurekaNode.register(final InstanceInfo info): let's see exactly how the data is synchronized.

public void register(final InstanceInfo info) throws Exception {
    // NOTE: the expiry time handed to the task dispatcher; by default it is the current time plus the lease renewal interval (30 seconds)
    long expiryTime = System.currentTimeMillis() + getLeaseRenewalOf(info);
    batchingDispatcher.process(
            taskId("register", info),
            new InstanceReplicationTask(targetHost, Action.Register, info, null, true) {
                public EurekaHttpResponse<Void> execute() {
                    return replicationClient.register(info);
                }
            },
            expiryTime
    );
}
  • This is where Eureka's task batching comes into play. Normally, synchronization between peers would require many separate calls, and with more EurekaServers there would be a great many HTTP requests, which is exactly what task batching was conceived to avoid. The price is a certain delay in registrations and cancellations; every advantage comes with a drawback, but the delay stays within a reasonable, tolerable range.

  • Within the expiryTime window, the batch processor merges tasks into a single List and, when the request is sent, packs that whole List into one call. A single batch List may therefore contain a mix of cancel, register, heartbeat and status tasks.

  • Continuing with the source: batchingDispatcher.process is invoked, so let's look directly at the TaskDispatchers.createBatchingTaskDispatcher method.

public static <ID, T> TaskDispatcher<ID, T> createBatchingTaskDispatcher(String id,
                                                                         int maxBufferSize,
                                                                         int workloadSize,
                                                                         int workerCount,
                                                                         long maxBatchingDelay,
                                                                         long congestionRetryDelayMs,
                                                                         long networkFailureRetryMs,
                                                                         TaskProcessor<T> taskProcessor) {
    final AcceptorExecutor<ID, T> acceptorExecutor = new AcceptorExecutor<>(
            id, maxBufferSize, workloadSize, maxBatchingDelay, congestionRetryDelayMs, networkFailureRetryMs
    );
    final TaskExecutors<ID, T> taskExecutor = TaskExecutors.batchExecutors(id, workerCount, taskProcessor, acceptorExecutor);
    return new TaskDispatcher<ID, T>() {
        @Override
        public void process(ID id, T task, long expiryTime) {
            acceptorExecutor.process(id, task, expiryTime);
        }

        @Override
        public void shutdown() {
            acceptorExecutor.shutdown();
            taskExecutor.shutdown();
        }
    };
}
  • The process method here adds the task to a queue. Where there is an enqueue there must be a dequeue; I will not walk through every detail of how tasks are taken out, just how they are finally executed. Step into TaskExecutors.batchExecutors in the line final TaskExecutors<ID, T> taskExecutor = TaskExecutors.batchExecutors(id, workerCount, taskProcessor, acceptorExecutor).
static <ID, T> TaskExecutors<ID, T> batchExecutors(final String name,
                                                   int workerCount,
                                                   final TaskProcessor<T> processor,
                                                   final AcceptorExecutor<ID, T> acceptorExecutor) {
    final AtomicBoolean isShutdown = new AtomicBoolean();
    final TaskExecutorMetrics metrics = new TaskExecutorMetrics(name);
    return new TaskExecutors<>(new WorkerRunnableFactory<ID, T>() {
        @Override
        public WorkerRunnable<ID, T> create(int idx) {
            return new BatchWorkerRunnable<>("TaskBatchingWorker-" +name + '-' + idx, isShutdown, metrics, processor, acceptorExecutor);
        }
    }, workerCount, isShutdown);
}
  • In the static batchExecutors method of TaskExecutors, a BatchWorkerRunnable implementation is returned, so let's open the BatchWorkerRunnable class; since it is a Runnable, there must be a run method.
@Override
public void run() {
    try {
        while (!isShutdown.get()) {
            // NOTE: wait for the semaphore released by batchWorkRequests.release() and get back the list of task holders
            List<TaskHolder<ID, T>> holders = getWork();
            metrics.registerExpiryTimes(holders);

            List<T> tasks = getTasksOf(holders);
            // NOTE: pack the batch of tasks into a single request to the peer node
            ProcessingResult result = processor.process(tasks);
            switch (result) {
                case Success:
                    break;
                case Congestion:
                case TransientError:
                    taskDispatcher.reprocess(holders, result);
                    break;
                case PermanentError:
                    logger.warn("Discarding {} tasks of {} due to permanent error", holders.size(), workerName);
            }
            metrics.registerTaskResult(result, tasks.size());
        }
    } catch (InterruptedException e) {
        // Ignore
    } catch (Throwable e) {
        // Safe-guard, so we never exit this loop in an uncontrolled way.
        logger.warn("Discovery WorkerThread error", e);
    }
}
  • This is the run method of BatchWorkerRunnable. It first has to acquire the semaphore before it can obtain a batch of tasks; once it has the batch, it calls processor.process(tasks) to push the data to the peer node. Next, let's look at ReplicationTaskProcessor.process;
@Override
public ProcessingResult process(List<ReplicationTask> tasks) {
    ReplicationList list = createReplicationListOf(tasks);
    try {
        // NOTE: the JerseyReplicationClient sends the whole list in one request here
        EurekaHttpResponse<ReplicationListResponse> response = replicationClient.submitBatchUpdates(list);
        int statusCode = response.getStatusCode();
        if (!isSuccess(statusCode)) {
            if (statusCode == 503) {
                logger.warn("Server busy (503) HTTP status code received from the peer {}; rescheduling tasks after delay", peerId);
                return ProcessingResult.Congestion;
            } else {
                // Unexpected error returned from the server. This should ideally never happen.
                logger.error("Batch update failure with HTTP status code {}; discarding {} replication tasks", statusCode, tasks.size());
                return ProcessingResult.PermanentError;
            }
        } else {
            handleBatchResponse(tasks, response.getEntity().getResponseList());
        }
    } catch (Throwable e) {
        if (isNetworkConnectException(e)) {
            logNetworkErrorSample(null, e);
            return ProcessingResult.TransientError;
        } else {
            logger.error("Not re-trying this exception because it does not seem to be a network exception", e);
            return ProcessingResult.PermanentError;
        }
    }
    return ProcessingResult.Success;
}
  • It feels like the truth is just ahead, so let's eagerly step into JerseyReplicationClient.submitBatchUpdates(ReplicationList replicationList).
@Override
public EurekaHttpResponse<ReplicationListResponse> submitBatchUpdates(ReplicationList replicationList) {
    ClientResponse response = null;
    try {
        response = jerseyApacheClient.resource(serviceUrl)
                // NOTE: this is the key part, the relative request path: peerreplication/batch/
                .path(PeerEurekaNode.BATCH_URL_PATH)
                .accept(MediaType.APPLICATION_JSON_TYPE)
                .type(MediaType.APPLICATION_JSON_TYPE)
                .post(ClientResponse.class, replicationList);
        if (!isSuccess(response.getStatus())) {
            return anEurekaHttpResponse(response.getStatus(), ReplicationListResponse.class).build();
        }
        ReplicationListResponse batchResponse = response.getEntity(ReplicationListResponse.class);
        return anEurekaHttpResponse(response.getStatus(), batchResponse).type(MediaType.APPLICATION_JSON_TYPE).build();
    } finally {
        if (response != null) {
            response.close();
        }
    }
}
  • Now that we have the relative path, let's search for the string "batch" to find the receiving method, i.e. something annotated with @Path. Sure enough, inside eureka-core-1.4.12.jar we find @Path("batch"): it is the batchReplication method of the PeerReplicationResource class, so let's open that method.
@Path("batch")
@POST
public Response batchReplication(ReplicationList replicationList) {
    try {
        ReplicationListResponse batchResponse = new ReplicationListResponse();
        // NOTE: the received task list is looped over and handled one by one; the core logic is in the dispatch method
        for (ReplicationInstance instanceInfo : replicationList.getReplicationList()) {
            try {
                batchResponse.addResponse(dispatch(instanceInfo));
            } catch (Exception e) {
                batchResponse.addResponse(new ReplicationInstanceResponse(Status.INTERNAL_SERVER_ERROR.getStatusCode(), null));
                logger.error(instanceInfo.getAction() + " request processing failed for batch item "
                        + instanceInfo.getAppName() + '/' + instanceInfo.getId(), e);
            }
        }
        return Response.ok(batchResponse).build();
    } catch (Throwable e) {
        logger.error("Cannot execute batch Request", e);
        return Response.status(Status.INTERNAL_SERVER_ERROR).build();
    }
}
  • We can see the batch being handled item by item in a loop; the finish line is in sight, so let's step into PeerReplicationResource.dispatch.
private ReplicationInstanceResponse dispatch(ReplicationInstance instanceInfo) {
    ApplicationResource applicationResource = createApplicationResource(instanceInfo);
    InstanceResource resource = createInstanceResource(instanceInfo, applicationResource);

    String lastDirtyTimestamp = toString(instanceInfo.getLastDirtyTimestamp());
    String overriddenStatus = toString(instanceInfo.getOverriddenStatus());
    String instanceStatus = toString(instanceInfo.getStatus());

    Builder singleResponseBuilder = new Builder();
    switch (instanceInfo.getAction()) {
        case Register:
            singleResponseBuilder = handleRegister(instanceInfo, applicationResource);
            break;
        case Heartbeat:
            singleResponseBuilder = handleHeartbeat(resource, lastDirtyTimestamp, overriddenStatus, instanceStatus);
            break;
        case Cancel:
            singleResponseBuilder = handleCancel(resource);
            break;
        case StatusUpdate:
            singleResponseBuilder = handleStatusUpdate(instanceInfo, resource);
            break;
        case DeleteStatusOverride:
            singleResponseBuilder = handleDeleteStatusOverride(instanceInfo, resource);
            break;
    }
    return singleResponseBuilder.build();
}
  • Pick any action type; again let's take Register and step into PeerReplicationResource.handleRegister.
private static Builder handleRegister(ReplicationInstance instanceInfo, ApplicationResource applicationResource) {
    // NOTE: REPLICATION is the constant private static final String REPLICATION = "true", and this calls back into ApplicationResource.addInstance
    applicationResource.addInstance(instanceInfo.getInstanceInfo(), REPLICATION);
    return new Builder().setStatusCode(Status.OK.getStatusCode());
}
  • The peer synchronization journey ends here: it ultimately calls back into ApplicationResource.addInstance, the very method a EurekaClient hits when it registers at startup. Peer-to-peer synchronization therefore goes through the same method, and the only thing that distinguishes the two cases is whether the isReplication flag is true or false. The rest of the ApplicationResource.addInstance flow was covered earlier, so it should now be clear how the registration flow works end to end, including how batched tasks keep the data of the EurekaServer nodes in sync.

EurekaClient Startup Flow Analysis

Swapping the run modes

Run the discovery-eureka service normally, start the provider-user service in Debug mode, and look at the log first;

2017-10-23 19:43:07.688  INFO 1488 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Starting beans in phase 0
2017-10-23 19:43:07.694  INFO 1488 --- [           main] o.s.c.n.eureka.InstanceInfoFactory       : Setting initial instance status as: STARTING
2017-10-23 19:43:07.874  INFO 1488 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using JSON encoding codec LegacyJacksonJson
2017-10-23 19:43:07.874  INFO 1488 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using JSON decoding codec LegacyJacksonJson
2017-10-23 19:43:07.971  INFO 1488 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using XML encoding codec XStreamXml
2017-10-23 19:43:07.971  INFO 1488 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using XML decoding codec XStreamXml
2017-10-23 19:43:08.134  INFO 1488 --- [           main] c.n.d.s.r.aws.ConfigClusterResolver      : Resolving eureka endpoints via configuration
2017-10-23 19:43:08.344  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Disable delta property : false
2017-10-23 19:43:08.344  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Single vip registry refresh property : null
2017-10-23 19:43:08.344  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Force full registry fetch : false
2017-10-23 19:43:08.344  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Application is null : false
2017-10-23 19:43:08.344  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Registered Applications size is zero : true
2017-10-23 19:43:08.344  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Application version is -1: true
2017-10-23 19:43:08.345  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Getting all instance registry info from the eureka server
2017-10-23 19:43:08.630  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : The response status is 200
2017-10-23 19:43:08.631  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Starting heartbeat executor: renew interval is: 30
2017-10-23 19:43:08.634  INFO 1488 --- [           main] c.n.discovery.InstanceInfoReplicator     : InstanceInfoReplicator onDemand update allowed rate per min is 4
2017-10-23 19:43:08.637  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Discovery Client initialized at timestamp 1508758988637 with initial instances count: 0
2017-10-23 19:43:08.657  INFO 1488 --- [           main] c.n.e.EurekaDiscoveryClientConfiguration : Registering application springms-provider-user with eureka with status UP
2017-10-23 19:43:08.658  INFO 1488 --- [           main] com.netflix.discovery.DiscoveryClient    : Saw local status change event StatusChangeEvent [timestamp=1508758988658, current=UP, previous=STARTING]
2017-10-23 19:43:08.659  INFO 1488 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_SPRINGMS-PROVIDER-USER/springms-provider-user:192.168.3.101:7900: registering service...
2017-10-23 19:43:08.768  INFO 1488 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 7900 (http)
2017-10-23 19:43:08.768  INFO 1488 --- [           main] c.n.e.EurekaDiscoveryClientConfiguration : Updating port to 7900
2017-10-23 19:43:08.773  INFO 1488 --- [           main] c.s.cloud.MsProviderUserApplication      : Started ProviderApplication in 882.1 seconds (JVM running for 10.398)

The main loading flow of the service provider

  • 【1】: Reading the log carefully, the DefaultLifecycleProcessor first processes some beans, which means the start methods of beans implementing SmartLifecycle will be invoked next (a minimal sketch of such a bean follows this list);

  • 【2】: Then the EurekaClient's initial status is set to STARTING, and the encoding codecs are initialized, specifying which payloads use JSON and which use XML;

  • 【3】: Right after that it logs that force full registry fetch is false, that the number of registered applications is zero, that the heartbeat executor is started with a 30-second renew interval, and finally that the Discovery Client has finished initializing;
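
As a reference for step 【1】, here is a minimal, hypothetical SmartLifecycle bean (the class name and output are made up). Spring's DefaultLifecycleProcessor calls start() on such beans once the context is refreshed, and that is exactly the hook EurekaDiscoveryClientConfiguration uses below to kick off registration.

import org.springframework.context.SmartLifecycle;
import org.springframework.stereotype.Component;

// A hypothetical lifecycle bean that only demonstrates the start() callback Eureka's client configuration relies on.
@Component
public class StartupProbe implements SmartLifecycle {

    private volatile boolean running;

    @Override
    public void start() {
        // Invoked by DefaultLifecycleProcessor after the application context has been refreshed.
        System.out.println("SmartLifecycle.start() invoked in phase " + getPhase());
        this.running = true;
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public int getPhase() {
        return 0;
    }
}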

The EnableEurekaClient annotation.

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
@EnableDiscoveryClient
public @interface EnableEurekaClient {}

@EnableEurekaClient

Interestingly, this annotation class is itself annotated with @EnableDiscoveryClient, so it is worth looking at that annotation class as well.

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
@Import(EnableDiscoveryClientImportSelector.class)
public @interface EnableDiscoveryClient {}

@EnableDiscoveryClient

This annotation class carries the rather special @Import annotation, which makes us suspect that most of the logic lives in the EnableDiscoveryClientImportSelector class.

EnableDiscoveryClientImportSelector

@Order(Ordered.LOWEST_PRECEDENCE - 100)
public class EnableDiscoveryClientImportSelector
        extends SpringFactoryImportSelector<EnableDiscoveryClient> {
    @Override
    protected boolean isEnabled() {
        return new RelaxedPropertyResolver(getEnvironment()).getProperty(
                "spring.cloud.discovery.enabled", Boolean.class, Boolean.TRUE);
    }
    @Override
    protected boolean hasDefaultFactory() {
        return true;
    }
}

EnableDiscoveryClientImportSelector extends the SpringFactoryImportSelector class and overrides the isEnabled() method. It reads the spring.cloud.discovery.enabled property from the environment and defaults to true, so discovery stays enabled unless that property is explicitly set to false, which is why selectImports() below actually does its work.

/**
 * Select and return the names of which class(es) should be imported based on
 * the {@link AnnotationMetadata} of the importing @{@link Configuration} class.
 */
@Override
public String[] selectImports(AnnotationMetadata metadata) {
    if (!isEnabled()) { 
        return new String[0];
    }
    AnnotationAttributes attributes = AnnotationAttributes.fromMap(
            metadata.getAnnotationAttributes(this.annotationClass.getName(), true));
    Assert.notNull(attributes, "No " + getSimpleName() + " attributes found. Is "
            + metadata.getClassName() + " annotated with @" + getSimpleName() + "?");
    // Find all possible auto configuration classes, filtering duplicates
    List<String> factories = new ArrayList<>(new LinkedHashSet<>(SpringFactoriesLoader
            .loadFactoryNames(this.annotationClass, this.beanClassLoader)));
    if (factories.isEmpty() && !hasDefaultFactory()) {
        throw new IllegalStateException("Annotation @" + getSimpleName()
                + " found, but there are no implementations. Did you forget to include a starter?");
    }
    if (factories.size() > 1) {
        // there should only ever be one DiscoveryClient, but there might be more than
        // one factory
        log.warn("More than one implementation " + "of @" + getSimpleName()
                + " (now relying on @Conditionals to pick one): " + factories);
    }
    return factories.toArray(new String[factories.size()]);
}

EnableDiscoveryClientImportSelector.selectImports

It first reads some attributes from the annotation metadata and then loads a list of factory class names; let's step into the loadFactoryNames method.

public static List<String> loadFactoryNames(Class<?> factoryClass, ClassLoader classLoader) {
    String factoryClassName = factoryClass.getName();
    try {
        // NOTE: public static final String FACTORIES_RESOURCE_LOCATION = "META-INF/spring.factories";
        // NOTE: a configuration file inside this jar
        Enumeration<URL> urls = (classLoader != null ? classLoader.getResources(FACTORIES_RESOURCE_LOCATION) :
                ClassLoader.getSystemResources(FACTORIES_RESOURCE_LOCATION));
        List<String> result = new ArrayList<String>();
        while (urls.hasMoreElements()) {
            URL url = urls.nextElement();
            Properties properties = PropertiesLoaderUtils.loadProperties(new UrlResource(url));
            String factoryClassNames = properties.getProperty(factoryClassName);
            result.addAll(Arrays.asList(StringUtils.commaDelimitedListToStringArray(factoryClassNames)));
        }
        return result;
    }
    catch (IOException ex) {
        throw new IllegalArgumentException("Unable to load [" + factoryClass.getName() +
                "] factories from location [" + FACTORIES_RESOURCE_LOCATION + "]", ex);
    }
}

So a configuration file is loaded. What does it contain? Open the spring.factories file in the jar that contains SpringFactoryImportSelector and have a look.

# AutoConfiguration
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.client.CommonsClientAutoConfiguration,\
org.springframework.cloud.client.discovery.noop.NoopDiscoveryClientAutoConfiguration,\
org.springframework.cloud.client.hypermedia.CloudHypermediaAutoConfiguration,\
org.springframework.cloud.client.loadbalancer.LoadBalancerAutoConfiguration,\
org.springframework.cloud.commons.util.UtilAutoConfiguration
# Environment Post Processors
org.springframework.boot.env.EnvironmentPostProcessor=\
org.springframework.cloud.client.HostInfoEnvironmentPostProcessor

These are all class names ending in Configuration, so what gets loaded is a batch of configuration classes.
In our case, the factories list holds a single class name: org.springframework.cloud.netflix.eureka.EurekaDiscoveryClientConfiguration.

EurekaDiscoveryClientConfiguration

@Configuration
@EnableConfigurationProperties
@ConditionalOnClass(EurekaClientConfig.class)
@ConditionalOnProperty(value = "eureka.client.enabled", matchIfMissing = true)
@CommonsLog
public class EurekaDiscoveryClientConfiguration implements SmartLifecycle, Ordered {
    @Override
    public void start() {
        // only set the port if the nonSecurePort is 0 and this.port != 0
        if (this.port.get() != 0 && this.instanceConfig.getNonSecurePort() == 0) {
            this.instanceConfig.setNonSecurePort(this.port.get());
        }
        // only initialize if nonSecurePort is greater than 0 and it isn't already running
        // because of containerPortInitializer below
        if (!this.running.get() && this.instanceConfig.getNonSecurePort() > 0) {
            maybeInitializeClient();
            if (log.isInfoEnabled()) {
                log.info("Registering application " + this.instanceConfig.getAppname()
                        + " with eureka with status "
                        + this.instanceConfig.getInitialStatus());
            }
            this.applicationInfoManager
                    .setInstanceStatus(this.instanceConfig.getInitialStatus());
            if (this.healthCheckHandler != null) {
                this.eurekaClient.registerHealthCheck(this.healthCheckHandler);
            }
            this.context.publishEvent(
                    new InstanceRegisteredEvent<>(this, this.instanceConfig));
            this.running.set(true);
        }
    }
}
  • First of all, the class implements the SmartLifecycle interface, so it must implement the start method, and that start method is what gets executed during startup. Inside it, the call this.applicationInfoManager.setInstanceStatus(this.instanceConfig.getInitialStatus()) involves an observer-pattern callback.
// the ApplicationInfoManager.setInstanceStatus method
public synchronized void setInstanceStatus(InstanceStatus status) {// set a breakpoint here
    InstanceStatus prev = instanceInfo.setStatus(status);
    if (prev != null) {
        for (StatusChangeListener listener : listeners.values()) {
            try {
                listener.notify(new StatusChangeEvent(prev, status));
            } catch (Exception e) {
                logger.warn("failed to notify listener: {}", listener.getId(), e);
            }
        }
    }
}
  • This method calls back every listener that implements StatusChangeListener whenever the status changes, provided the listener has been registered into listeners first.

  • So we can conclude that for the callback to fire, something must register such a listener beforehand, and that registration has to happen before start is executed. We therefore first look for the method of ApplicationInfoManager that adds entries to listeners.

public void registerStatusChangeListener(StatusChangeListener listener) {// set a breakpoint here
    listeners.put(listener.getId(), listener);
}
  • Then we search backwards for the places where registerStatusChangeListener is called.

  • As it turns out, it is called in exactly one place: DiscoveryClient.initScheduledTasks. Moreover, initScheduledTasks is itself invoked from the DiscoveryClient constructor, so we also set breakpoints on initScheduledTasks and on the constructor that calls it.

Sure enough, EurekaDiscoveryClientConfiguration.start is invoked, then this.applicationInfoManager.setInstanceStatus(this.instanceConfig.getInitialStatus()) hits its breakpoint, and stepping further we land in the notify callback inside DiscoveryClient.initScheduledTasks.

  • The breakpoints are hit one after another in exactly the order we analyzed, which also matches the order of the log output, so it is clearly worth taking a close look at what DiscoveryClient.initScheduledTasks really does. On second thought, let's first look at the constructor that calls initScheduledTasks.

The DiscoveryClient constructor annotated with @Inject.

@Inject
DiscoveryClient(ApplicationInfoManager applicationInfoManager, EurekaClientConfig config, DiscoveryClientOptionalArgs args, Provider<BackupRegistry> backupRegistryProvider) {
    if (args != null) {
        this.healthCheckHandlerProvider = args.healthCheckHandlerProvider;
        this.healthCheckCallbackProvider = args.healthCheckCallbackProvider;
        this.eventListeners.addAll(args.getEventListeners());
    } else {
        this.healthCheckCallbackProvider = null;
        this.healthCheckHandlerProvider = null;
    }
    
    this.applicationInfoManager = applicationInfoManager;
    InstanceInfo myInfo = applicationInfoManager.getInfo();

    clientConfig = config;
    staticClientConfig = clientConfig;
    transportConfig = config.getTransportConfig();
    instanceInfo = myInfo;
    if (myInfo != null) {
        appPathIdentifier = instanceInfo.getAppName() + "/" + instanceInfo.getId();
    } else {
        logger.warn("Setting instanceInfo to a passed in null value");
    }

    this.backupRegistryProvider = backupRegistryProvider;

    this.urlRandomizer = new EndpointUtils.InstanceInfoBasedUrlRandomizer(instanceInfo);
    localRegionApps.set(new Applications());

    fetchRegistryGeneration = new AtomicLong(0);

    remoteRegionsToFetch = new AtomicReference<String>(clientConfig.fetchRegistryForRemoteRegions());
    remoteRegionsRef = new AtomicReference<>(remoteRegionsToFetch.get() == null ? null : remoteRegionsToFetch.get().split(","));

    if (config.shouldFetchRegistry()) {
        this.registryStalenessMonitor = new ThresholdLevelsMetric(this, METRIC_REGISTRY_PREFIX + "lastUpdateSec_", new long[]{15L, 30L, 60L, 120L, 240L, 480L});
    } else {
        this.registryStalenessMonitor = ThresholdLevelsMetric.NO_OP_METRIC;
    }

    if (config.shouldRegisterWithEureka()) {
        this.heartbeatStalenessMonitor = new ThresholdLevelsMetric(this, METRIC_REGISTRATION_PREFIX + "lastHeartbeatSec_", new long[]{15L, 30L, 60L, 120L, 240L, 480L});
    } else {
        this.heartbeatStalenessMonitor = ThresholdLevelsMetric.NO_OP_METRIC;
    }

    if (!config.shouldRegisterWithEureka() && !config.shouldFetchRegistry()) {
        logger.info("Client configured to neither register nor query for data.");
        scheduler = null;
        heartbeatExecutor = null;
        cacheRefreshExecutor = null;
        eurekaTransport = null;
        instanceRegionChecker = new InstanceRegionChecker(new PropertyBasedAzToRegionMapper(config), clientConfig.getRegion());

        // This is a bit of hack to allow for existing code using DiscoveryManager.getInstance()
        // to work with DI'd DiscoveryClient
        DiscoveryManager.getInstance().setDiscoveryClient(this);
        DiscoveryManager.getInstance().setEurekaClientConfig(config);

        initTimestampMs = System.currentTimeMillis();

        logger.info("Discovery Client initialized at timestamp {} with initial instances count: {}",
                initTimestampMs, this.getApplications().size());
        return;  // no need to setup up an network tasks and we are done
    }

    try {
        // NOTE: prepare the scheduler for the timed tasks
        scheduler = Executors.newScheduledThreadPool(3,
                new ThreadFactoryBuilder()
                        .setNameFormat("DiscoveryClient-%d")
                        .setDaemon(true)
                        .build());

        // NOTE: create the heartbeat task thread pool
        heartbeatExecutor = new ThreadPoolExecutor(
                1, clientConfig.getHeartbeatExecutorThreadPoolSize(), 0, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(),
                new ThreadFactoryBuilder()
                        .setNameFormat("DiscoveryClient-HeartbeatExecutor-%d")
                        .setDaemon(true)
                        .build()
        );  // use direct handoff

        // NOTE: create the cache-refresh task thread pool
        cacheRefreshExecutor = new ThreadPoolExecutor(
                1, clientConfig.getCacheRefreshExecutorThreadPoolSize(), 0, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(),
                new ThreadFactoryBuilder()
                        .setNameFormat("DiscoveryClient-CacheRefreshExecutor-%d")
                        .setDaemon(true)
                        .build()
        );  // use direct handoff

        eurekaTransport = new EurekaTransport();
        scheduleServerEndpointTask(eurekaTransport, args);

        AzToRegionMapper azToRegionMapper;
        if (clientConfig.shouldUseDnsForFetchingServiceUrls()) {
            azToRegionMapper = new DNSBasedAzToRegionMapper(clientConfig);
        } else {
            azToRegionMapper = new PropertyBasedAzToRegionMapper(clientConfig);
        }
        if (null != remoteRegionsToFetch.get()) {
            azToRegionMapper.setRegionsToFetch(remoteRegionsToFetch.get().split(","));
        }
        instanceRegionChecker = new InstanceRegionChecker(azToRegionMapper, clientConfig.getRegion());
    } catch (Throwable e) {
        throw new RuntimeException("Failed to initialize DiscoveryClient!", e);
    }

    if (clientConfig.shouldFetchRegistry() && !fetchRegistry(false)) {
        fetchRegistryFromBackup();
    }

    // NOTE: initialize the scheduled tasks
    initScheduledTasks();
    try {
        Monitors.registerObject(this);
    } catch (Throwable e) {
        logger.warn("Cannot register timers", e);
    }

    // This is a bit of hack to allow for existing code using DiscoveryManager.getInstance()
    // to work with DI'd DiscoveryClient
    DiscoveryManager.getInstance().setDiscoveryClient(this);
    DiscoveryManager.getInstance().setEurekaClientConfig(config);

    initTimestampMs = System.currentTimeMillis();
    logger.info("Discovery Client initialized at timestamp {} with initial instances count: {}",
            initTimestampMs, this.getApplications().size());
}
  • Reading down the constructor, initScheduledTasks, as its name suggests, initializes the scheduled tasks, so that is where the real action is; let's step into it.
private void initScheduledTasks() {
    if (clientConfig.shouldFetchRegistry()) {
        // registry cache refresh timer
        // NOTE: how often to fetch the service registry; the default is 30 seconds
        int registryFetchIntervalSeconds = clientConfig.getRegistryFetchIntervalSeconds();
        int expBackOffBound = clientConfig.getCacheRefreshExecutorExponentialBackOffBound();
        // NOTE: a scheduled task that fetches the service registry every 30 seconds
        scheduler.schedule(
                new TimedSupervisorTask(
                        "cacheRefresh",
                        scheduler,
                        cacheRefreshExecutor,
                        registryFetchIntervalSeconds,
                        TimeUnit.SECONDS,
                        expBackOffBound,
                        new CacheRefreshThread()
                ),
                registryFetchIntervalSeconds, TimeUnit.SECONDS);
    }

    if (clientConfig.shouldRegisterWithEureka()) {
        // NOTE: how often to send a heartbeat renewal; the default interval is 30 seconds
        int renewalIntervalInSecs = instanceInfo.getLeaseInfo().getRenewalIntervalInSecs();
        int expBackOffBound = clientConfig.getHeartbeatExecutorExponentialBackOffBound();
        logger.info("Starting heartbeat executor: " + "renew interval is: " + renewalIntervalInSecs);

        // Heartbeat timer
        // NOTE: a scheduled task that sends a heartbeat renewal to the EurekaServer every 30 seconds
        scheduler.schedule(
                new TimedSupervisorTask(
                        "heartbeat",
                        scheduler,
                        heartbeatExecutor,
                        renewalIntervalInSecs,
                        TimeUnit.SECONDS,
                        expBackOffBound,
                        new HeartbeatThread()
                ),
                renewalIntervalInSecs, TimeUnit.SECONDS);

        // InstanceInfo replicator
        // NOTE: the InstanceInfo replicator; it periodically refreshes the local instance info (including dataCenterInfo) and replicates it, every 30 seconds by default
        instanceInfoReplicator = new InstanceInfoReplicator(
                this,
                instanceInfo,
                clientConfig.getInstanceInfoReplicationIntervalSeconds(),
                2); // burstSize

        // NOTE: instantiate the status-change listener
        statusChangeListener = new ApplicationInfoManager.StatusChangeListener() {
            @Override
            public String getId() {
                return "statusChangeListener";
            }

            @Override
            public void notify(StatusChangeEvent statusChangeEvent) {
                if (InstanceStatus.DOWN == statusChangeEvent.getStatus() ||
                        InstanceStatus.DOWN == statusChangeEvent.getPreviousStatus()) {
                    // log at warn level if DOWN was involved
                    logger.warn("Saw local status change event {}", statusChangeEvent);
                } else {
                    logger.info("Saw local status change event {}", statusChangeEvent);
                }

                // NOTE: this method is called back whenever the status changes
                instanceInfoReplicator.onDemandUpdate();
            }
        };

        // NOTE: register the status-change listener
        if (clientConfig.shouldOnDemandUpdateStatusChange()) {
            applicationInfoManager.registerStatusChangeListener(statusChangeListener);
        }

        instanceInfoReplicator.start(clientConfig.getInitialInstanceInfoReplicationIntervalSeconds());
    } else {
        logger.info("Not registering with Eureka server per configuration");
    }
}
  • Walking through this method with the comments above, it sets up most of what we care about on the EurekaClient side: a scheduled task to fetch the registry, a scheduled task to refresh the cache, a scheduled heartbeat renewal, periodic replication of the instance info, and the status-change listener callback. Yet there is no registration call anywhere. What is going on?

  • instanceInfoReplicator.onDemandUpdate() is exactly what runs when the status changes.

public boolean onDemandUpdate() {
    if (rateLimiter.acquire(burstSize, allowedRatePerMinute)) {
        scheduler.submit(new Runnable() {
            @Override
            public void run() {
                logger.debug("Executing on-demand update of local InstanceInfo");

                Future latestPeriodic = scheduledPeriodicRef.get();
                if (latestPeriodic != null && !latestPeriodic.isDone()) {
                    logger.debug("Canceling the latest scheduled update, it will be rescheduled at the end of on demand update");
                    latestPeriodic.cancel(false);
                }

                // NOTE: this is where the instance info is refreshed and the registration happens
                InstanceInfoReplicator.this.run();
            }
        });
        return true;
    } else {
        logger.warn("Ignoring onDemand update due to rate limiter");
        return false;
    }
}
  • In onDemandUpdate, the really interesting call is InstanceInfoReplicator.this.run(). A run method? Does InstanceInfoReplicator implement the Runnable interface? Checking the class confirms that it indeed implements Runnable.

  • This run method should be where the registration we are looking for lives.

public void run() {
    try {
        discoveryClient.refreshInstanceInfo();
        Long dirtyTimestamp = instanceInfo.isDirtyWithTime();
        if (dirtyTimestamp != null) {
            discoveryClient.register();
            instanceInfo.unsetIsDirty(dirtyTimestamp);
        }
    } catch (Throwable t) {
        logger.warn("There was a problem with the instance info replicator", t);
    } finally {
        Future next = scheduler.schedule(this, replicationIntervalSeconds, TimeUnit.SECONDS);
        scheduledPeriodicRef.set(next);
    }
}
  • discoveryClient.register(): so this register call is the registration itself.
boolean register() throws Throwable {
    logger.info(PREFIX + appPathIdentifier + ": registering service...");
    EurekaHttpResponse<Void> httpResponse;
    try {
        httpResponse = eurekaTransport.registrationClient.register(instanceInfo);
    } catch (Exception e) {
        logger.warn("{} - registration failed {}", PREFIX + appPathIdentifier, e.getMessage(), e);
        throw e;
    }
    if (logger.isInfoEnabled()) {
        logger.info("{} - registration status: {}", PREFIX + appPathIdentifier, httpResponse.getStatusCode());
    }
    return httpResponse.getStatusCode() == 204;
}
  • So registration is performed through the request object wrapped by EurekaHttpClient. Digging further into registrationClient.register, we arrive at AbstractJerseyEurekaHttpClient.register.
@Override
public EurekaHttpResponse<Void> register(InstanceInfo info) {
    String urlPath = "apps/" + info.getAppName();
    ClientResponse response = null;
    try {
        Builder resourceBuilder = jerseyClient.resource(serviceUrl).path(urlPath).getRequestBuilder();
        addExtraHeaders(resourceBuilder);
        response = resourceBuilder
                .header("Accept-Encoding", "gzip")
                .type(MediaType.APPLICATION_JSON_TYPE)
                .accept(MediaType.APPLICATION_JSON)
                // NOTE: the request body carries the full info of the current application instance
                .post(ClientResponse.class, info);
        return anEurekaHttpResponse(response.getStatus()).headers(headersOf(response)).build();
    } finally {
        if (logger.isDebugEnabled()) {
            logger.debug("Jersey HTTP POST {}/{} with instance {}; statusCode={}", serviceUrl, urlPath, info.getId(),
                    response == null ? "N/A" : response.getStatus());
        }
        if (response != null) {
            response.close();
        }
    }
}
  • The request is sent with the Jersey RESTful framework, and on the EurekaServer side it is received by ApplicationResource.addInstance, which handles the client's registration. That completes the story of how a EurekaClient registers; a quick way to verify the result from the outside is sketched below.
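
As a closing usage note, one way to confirm that the registration above actually landed on the server is to query Eureka's registry REST endpoint. The sketch below is hypothetical: the host, port and application name are assumptions for a local setup and should be adjusted to your environment.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// A hypothetical check: ask the Eureka Server for the instances registered under an application name.
public class RegistryCheck {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8761/eureka/apps/SPRINGMS-PROVIDER-USER");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Accept", "application/json");
        System.out.println("HTTP status: " + connection.getResponseCode());
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Prints the InstanceInfo that ApplicationResource.addInstance stored in the registry.
                System.out.println(line);
            }
        }
    }
}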