AmbariAgent: Startup and Controller Main-Loop Analysis

The AmbariAgent process is started by the AmbariAgent.py script, which in turn launches the main.py script, where the real processing logic begins.

def main():
  global status

  if (os.environ.has_key("PYTHON")):
    PYTHON = os.environ["PYTHON"]
  else:
    print("Key 'PYTHON' is not defined in environment variables")
    sys.exit(1)

  args = list(sys.argv)
  del args[0]

  mergedArgs = [PYTHON, AGENT_SCRIPT] + args

  # Become a parent for all subprocesses
  os.setpgrp()

  try:
    while status == AGENT_AUTO_RESTART_EXIT_CODE:
      # Spawn main.py as a child process; doing it this way makes it possible to restart main.py
      mainProcess = subprocess.Popen(mergedArgs)
      mainProcess.communicate()
      status = mainProcess.returncode
      if os.path.isfile(AGENT_PID_FILE) and status == AGENT_AUTO_RESTART_EXIT_CODE:
        os.remove(AGENT_PID_FILE)
  finally:
    os.killpg(0, signal.SIGKILL)
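
The supervisor pattern above is easy to isolate: AmbariAgent.py does nothing but respawn main.py whenever the child exits with the dedicated auto-restart exit code, and tears the whole process group down otherwise. A minimal, self-contained sketch of that respawn loop (the exit code 77 and the coin-flip child are invented for illustration; Ambari defines its own AGENT_AUTO_RESTART_EXIT_CODE):

import subprocess
import sys

AUTO_RESTART_EXIT_CODE = 77  # hypothetical stand-in for AGENT_AUTO_RESTART_EXIT_CODE

# A child that asks to be restarted about half the time, then finally exits cleanly
CHILD = "import random, sys; sys.exit(77 if random.random() < 0.5 else 0)"

status = AUTO_RESTART_EXIT_CODE
while status == AUTO_RESTART_EXIT_CODE:
    child = subprocess.Popen([sys.executable, "-c", CHILD])
    child.communicate()              # block until the child exits
    status = child.returncode
    print("child exited with %s" % status)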

On an agent host you will therefore see two processes:

root     28527  0.0  0.0  70368 20644 pts/1    S    Oct10   0:00 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start
root     28533  1.3  0.1 1550348 33132 pts/1   Sl   Oct10 137:15 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/main.py start

main.py: the program entry point

main.py starts the Controller, the core processing class of the agent. It is responsible for registering with the server, keeping the heartbeat with the server alive, and receiving and processing commands from the server:

def run_threads(server_hostname, heartbeat_stop_callback):
  # Launch Controller communication
  
  controller = Controller(config, server_hostname, heartbeat_stop_callback)
  controller.start()
  time.sleep(2) # in order to get controller.statusCommandsExecutor initialized
  while controller.is_alive():
    time.sleep(0.1)

    need_relaunch, reason = controller.get_status_commands_executor().need_relaunch
    if need_relaunch:
      controller.get_status_commands_executor().relaunch(reason)

  controller.get_status_commands_executor().kill("AGENT_STOPPED", can_relaunch=False)
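
run_threads is essentially a watchdog: the main thread does nothing but watch the Controller thread and service relaunch requests coming from the status commands executor. A stripped-down sketch of the same structure, with the executor replaced by a hypothetical Helper class (all names here are invented):

import threading
import time

class Helper(object):
    """Stand-in for the status commands executor."""
    def __init__(self):
        self.need_relaunch = (False, None)   # (flag, reason)

    def relaunch(self, reason):
        print("relaunching helper: %s" % reason)
        self.need_relaunch = (False, None)

def watch(worker, helper):
    # Block until the worker thread dies, servicing relaunch requests in between
    while worker.is_alive():
        time.sleep(0.1)
        need, reason = helper.need_relaunch
        if need:
            helper.relaunch(reason)

worker = threading.Thread(target=time.sleep, args=(1,))
worker.start()
watch(worker, Helper())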

The Controller

The Controller thread's execution logic: the agent communicates with the server in pull mode. The agent periodically sends HTTP requests to the server to keep the heartbeat alive, then fetches and parses the server's responses to process any commands they contain.

def run(self):
    try:
      # Create the ActionQueue
      self.actionQueue = ActionQueue(self.config, controller=self)
      # If multi-process execution of status commands is enabled, start a multi-process executor; otherwise run in a single process
      if self.config.get_multiprocess_status_commands_executor_enabled():
        self.statusCommandsExecutor = MultiProcessStatusCommandsExecutor(self.config, self.actionQueue)
      else:
        self.statusCommandsExecutor = SingleProcessStatusCommandsExecutor(self.config, self.actionQueue)
      # Register cleanup with the exit helper so the executor is killed on shutdown
      ExitHelper().register(self.statusCommandsExecutor.kill, "CLEANUP_KILLING", can_relaunch=False)
      # Start the ActionQueue
      self.actionQueue.start()
      # Registration handler
      self.register = Register(self.config)
      # Heartbeat builder
      self.heartbeat = Heartbeat(self.actionQueue, self.config, self.alert_scheduler_handler.collector())
 
      opener = urllib2.build_opener()
      urllib2.install_opener(opener)
      
      # This thread runs continuously
      while True:
        self.repeatRegistration = False
        # Register with the server and run the heartbeat loop
        self.registerAndHeartbeat()
        if not self.repeatRegistration:
          logger.info("Finished heartbeating and registering cycle")
          break
    except:
      logger.exception("Controller thread failed with exception:")
      raise

    logger.info("Controller thread has successfully finished")

registerAndHeartbeat is the method the Controller runs for each interaction cycle with the server:

def registerAndHeartbeat(self):
    # On first startup, or when ambari-server restarts, the agent registers with the server
    registerResponse = self.registerWithServer()

    if "response" in registerResponse:
      message = registerResponse["response"]
      logger.info("Registration response from %s was %s", self.serverHostname, message)

      if self.isRegistered:
        # Clearing command queue to stop executing "stale" commands
        # after registration
        logger.info('Resetting ActionQueue...')
        self.actionQueue.reset()

        # Process callbacks
        for callback in self.registration_listeners:
          callback()
        
        # Sleep for one heartbeat interval
        time.sleep(self.netutil.HEARTBEAT_IDLE_INTERVAL_DEFAULT_MAX_SEC)
        # Core processing: start heartbeating with the server
        self.heartbeatWithServer()
      else:
        logger.info("Registration response from %s didn't contain 'response' as a key".format(self.serverHostname))

The registerWithServer() method implements the agent's registration with the server; if the agent is already registered, it is skipped:

def registerWithServer(self):
    """
    :return: returning from current method without setting self.isRegistered
    to True will lead to agent termination.
    """
    LiveStatus.SERVICES = []
    LiveStatus.CLIENT_COMPONENTS = []
    LiveStatus.COMPONENTS = []
    ret = {}
    
    # While the agent is not yet registered, keep sending registration requests
    while not self.isRegistered:
      try:
        data = json.dumps(self.register.build(self.version))
        prettyData = pprint.pformat(data)

        try:
          server_ip = socket.gethostbyname(self.hostname)
          logger.info("Registering with %s (%s) (agent=%s)", self.hostname, server_ip, prettyData)
        except socket.error:
          logger.warn("Unable to determine the IP address of '%s', agent registration may fail (agent=%s)",
                      self.hostname, prettyData)
        
        # Send the registration request to the server
        ret = self.sendRequest(self.registerUrl, data)
        prettyData = pprint.pformat(ret)
        logger.debug("Registration response is %s", prettyData)

        # exitstatus is a code of error which was raised on server side.
        # exitstatus = 0 (OK - Default)
        # exitstatus = 1 (Registration failed because different version of agent and server)
        exitstatus = 0
        if 'exitstatus' in ret.keys():
          exitstatus = int(ret['exitstatus'])
          
        # If the server returns exitstatus 1, registration failed, e.g. because the agent and server versions differ
        if exitstatus == 1:
          # log - message, which will be printed to agents log
          if 'log' in ret.keys():
            log = ret['log']
            logger.error(log)
          self.isRegistered = False
          self.repeatRegistration = False
          return ret

        self.responseId = int(ret['responseId'])
        logger.info("Registration Successful (response id = %s)", self.responseId)

        self.isRegistered = True
        
        # On successful registration, first update the various caches: cluster configurations, recovery manager, agent config, etc. (This is why configurations are automatically reloaded onto the agent after the cluster restarts!)
        # always update cached cluster configurations on registration
        # must be prior to any other operation
        self.cluster_configuration.update_configurations_from_heartbeat(ret)
        self.recovery_manager.update_configuration_from_registration(ret)
        self.config.update_configuration_from_registration(ret)
        logger.debug("Updated config:" + str(self.config))
        
        # Refresh the statusCommandsExecutor state
        # Start StatusCommandExecutor child process or restart it if already running
        # in order to receive up to date agent config.
        self.statusCommandsExecutor.relaunch("REGISTER_WITH_SERVER")
        
        # If the registration response contains status commands
        if 'statusCommands' in ret.keys():
          logger.debug("Got status commands on registration.")
          # Add them to the status command queue
          self.addToStatusQueue(ret['statusCommands'])
        else:
          self.hasMappedComponents = False

        # always update alert definitions on registration
        # Update alert definitions
        self.alert_scheduler_handler.update_definitions(ret)
      except ssl.SSLError:
        self.repeatRegistration = False
        self.isRegistered = False
        return
      except Exception, ex:
        # try a reconnect only after a certain amount of random time
        delay = randint(0, self.max_reconnect_retry_delay)
        logger.error("Unable to connect to: " + self.registerUrl, exc_info=True)
        logger.error("Error:" + str(ex))
        logger.warn(""" Sleeping for {0} seconds and then trying again """.format(delay,))
        time.sleep(delay)

    return ret
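
The error handling in this loop is worth calling out: on any failure other than an SSL error, the agent retries indefinitely, sleeping a random 0..max_reconnect_retry_delay seconds before each attempt so that a fleet of agents does not stampede a freshly restarted server. The same pattern in isolation (the delay bound is a hypothetical value):

import time
from random import randint

MAX_RECONNECT_RETRY_DELAY = 30  # seconds; hypothetical bound for this sketch

def call_with_random_retry(fn):
    # Retry forever; the randomized delay spreads reconnect attempts out over time
    while True:
        try:
            return fn()
        except Exception as ex:
            delay = randint(0, MAX_RECONNECT_RETRY_DELAY)
            print("request failed (%s), retrying in %s seconds" % (ex, delay))
            time.sleep(delay)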

heartbeatWithServer() implements the heartbeat between the agent and the server and is the Controller's main execution logic:

def heartbeatWithServer(self):
    self.DEBUG_HEARTBEAT_RETRIES = 0
    self.DEBUG_SUCCESSFULL_HEARTBEATS = 0
    retry = False
    certVerifFailed = False
    state_interval = int(self.config.get('heartbeat', 'state_interval_seconds', '60'))

    # last time when state was successfully sent to server
    last_state_timestamp = 0.0
    
    # in order to be able to check from logs that heartbeats processing
    # still running we log a message. However to avoid generating too
    # much log when the heartbeat runs at a higher rate (e.g. 1 second intervals)
    # we log the message at the same interval as 'state interval'
    heartbeat_running_msg_timestamp = 0.0

    # Throttle how often recovery commands are fetched from the recovery manager
    getrecoverycommands_timestamp = 0.0
    getrecoverycommands_interval = self.netutil.HEARTBEAT_IDLE_INTERVAL_DEFAULT_MAX_SEC

    heartbeat_interval = self.netutil.HEARTBEAT_IDLE_INTERVAL_DEFAULT_MAX_SEC

    while not self.DEBUG_STOP_HEARTBEATING:
      current_time = time.time()
      logging_level = logging.DEBUG
      if current_time - heartbeat_running_msg_timestamp > state_interval:
        # log more steps every minute or so
        logging_level = logging.INFO
        heartbeat_running_msg_timestamp = current_time

      try:
        logger.log(logging_level, "Heartbeat (response id = %s) with server is running...", self.responseId)

        send_state = False
        if not retry:
          if current_time - last_state_timestamp > state_interval:
            send_state = True

          logger.log(logging_level, "Building heartbeat message")

          data = json.dumps(self.heartbeat.build(self.responseId, send_state, self.hasMappedComponents))
        else:
          self.DEBUG_HEARTBEAT_RETRIES += 1

        if logger.isEnabledFor(logging.DEBUG):
          logger.log(logging_level, "Sending Heartbeat (id = %s): %s", self.responseId, data)
        else:
          logger.log(logging_level, "Sending Heartbeat (id = %s)", self.responseId)
        
        # Send the heartbeat request
        response = self.sendRequest(self.heartbeatUrl, data)
        exitStatus = 0
        if 'exitstatus' in response.keys():
          exitStatus = int(response['exitstatus'])
        
        # If the request failed, raise an exception
        if exitStatus != 0:
          raise Exception(response)

        serverId = int(response['responseId'])

        logger.log(logging_level, 'Heartbeat response received (id = %s)', serverId)

        cluster_size = int(response['clusterSize']) if 'clusterSize' in response.keys() else -1

        # TODO: this needs to be revised if hosts can be shared across multiple clusters
        heartbeat_interval = self.get_heartbeat_interval(cluster_size) \
          if cluster_size > 0 \
          else self.netutil.HEARTBEAT_IDLE_INTERVAL_DEFAULT_MAX_SEC

        logger.log(logging_level, "Heartbeat interval is %s seconds", heartbeat_interval)

        if 'hasMappedComponents' in response.keys():
          self.hasMappedComponents = response['hasMappedComponents'] is not False
        
        # If the server reports pending tasks, pause the recovery manager
        if 'hasPendingTasks' in response.keys():
          has_pending_tasks = bool(response['hasPendingTasks'])
          self.recovery_manager.set_paused(has_pending_tasks)
        
        # registrationCommand: re-register with the server
        if 'registrationCommand' in response.keys():
          # check if the registration command is None. If none skip
          if response['registrationCommand'] is not None:
            logger.info("RegistrationCommand received - repeat agent registration")
            self.isRegistered = False
            self.repeatRegistration = True
            return
            
        # Guard against possible agent memory leaks: if usage exceeds the configured threshold, restart the agent
        used_ram = get_used_ram()/1000
        # dealing with a possible memory leaks
        if self.max_ram_soft and used_ram >= self.max_ram_soft and not self.actionQueue.tasks_in_progress_or_pending():
          logger.error(AGENT_RAM_OVERUSE_MESSAGE.format(used_ram=used_ram, config_name="memory_threshold_soft_mb", max_ram=self.max_ram_soft))
          self.restartAgent()
        if self.max_ram_hard and used_ram >= self.max_ram_hard:
          logger.error(AGENT_RAM_OVERUSE_MESSAGE.format(used_ram=used_ram, config_name="memory_threshold_hard_mb", max_ram=self.max_ram_hard))
          self.restartAgent()

        if serverId != self.responseId + 1:
          logger.error("Error in responseId sequence - restarting")
          self.restartAgent()
        else:
          self.responseId = serverId
          if send_state:
            last_state_timestamp = current_time

        # if the response contains configurations, update the in-memory and
        # disk-based configuration cache (execution and alert commands have this)
        logger.log(logging_level, "Updating configurations from heartbeat")
        self.cluster_configuration.update_configurations_from_heartbeat(response)

        response_keys = response.keys()

        # there's case when canceled task can be processed in Action Queue.execute before adding rescheduled task to queue
        # this can cause command failure instead result suppression
        # so canceling and putting rescheduled commands should be executed atomically
        # cancelCommands: cancel the command while holding the actionQueue lock
        # executionCommands: queue the command for execution
        if 'cancelCommands' in response_keys or 'executionCommands' in response_keys:
          logger.log(logging_level, "Adding cancel/execution commands")
        with self.actionQueue.lock:
          if 'cancelCommands' in response_keys:
            self.cancelCommandInQueue(response['cancelCommands'])

          if 'executionCommands' in response_keys:
            execution_commands = response['executionCommands']
            self.recovery_manager.process_execution_commands(execution_commands)
            self.addToQueue(execution_commands)
        
        # statusCommands: add them to the status queue
        if 'statusCommands' in response_keys:
          # try storing execution command details and desired state
          self.addToStatusQueue(response['statusCommands'])

        if current_time - getrecoverycommands_timestamp > getrecoverycommands_interval:
          getrecoverycommands_timestamp = current_time
          if not self.actionQueue.tasks_in_progress_or_pending():
            logger.log(logging_level, "Adding recovery commands")
            recovery_commands = self.recovery_manager.get_recovery_commands()
            for recovery_command in recovery_commands:
              logger.info("Adding recovery command %s for component %s",
                          recovery_command['roleCommand'], recovery_command['role'])
              self.addToQueue([recovery_command])

        if 'alertDefinitionCommands' in response_keys:
          logger.log(logging_level, "Updating alert definitions")
          self.alert_scheduler_handler.update_definitions(response)

        if 'alertExecutionCommands' in response_keys:
          logger.log(logging_level, "Executing alert commands")
          self.alert_scheduler_handler.execute_alert(response['alertExecutionCommands'])

        if "true" == response['restartAgent']:
          logger.error("Received the restartAgent command")
          self.restartAgent()
        else:
          logger.debug("No commands sent from %s", self.serverHostname)

        if retry:
          logger.info("Reconnected to %s", self.heartbeatUrl)

        if "recoveryConfig" in response:
          # update the list of components enabled for recovery
          logger.log(logging_level, "Updating recovery config")
          self.recovery_manager.update_configuration_from_registration(response)

        retry = False
        certVerifFailed = False
        self.DEBUG_SUCCESSFULL_HEARTBEATS += 1
        self.DEBUG_HEARTBEAT_RETRIES = 0
        self.heartbeat_stop_callback.reset_heartbeat()
      except ssl.SSLError:
        self.repeatRegistration=False
        self.isRegistered = False
        logger.exception("SSLError while trying to heartbeat.")
        return
      except Exception, err:
        if "code" in err:
          logger.error(err.code)
        else:
          logException = False
          if logger.isEnabledFor(logging.DEBUG):
            logException = True

          exceptionMessage = str(err)
          errorMessage = "Unable to reconnect to {0} (attempts={1}, details={2})".format(self.heartbeatUrl, self.DEBUG_HEARTBEAT_RETRIES, exceptionMessage)

          if not retry:
            errorMessage = "Connection to {0} was lost (details={1})".format(self.serverHostname, exceptionMessage)

          logger.error(errorMessage, exc_info=logException)

          if 'certificate verify failed' in str(err) and not certVerifFailed:
            logger.warn("Server certificate verify failed. Did you regenerate server certificate?")
            certVerifFailed = True

        self.cachedconnect = None  # Previous connection is broken now
        retry = True

        #randomize the heartbeat
        delay = randint(0, self.max_reconnect_retry_delay)
        time.sleep(delay)

      # Sleep for some time
      timeout = heartbeat_interval - self.netutil.MINIMUM_INTERVAL_BETWEEN_HEARTBEATS
      logger.log(logging_level, "Waiting %s for next heartbeat", timeout)

      if 0 == self.heartbeat_stop_callback.wait(timeout, self.netutil.MINIMUM_INTERVAL_BETWEEN_HEARTBEATS):
        # Stop loop when stop event received
        logger.info("Stop event received")
        self.DEBUG_STOP_HEARTBEATING=True

      logger.log(logging_level, "Wait for next heartbeat over")