I. Work Preceding This Paper
Two papers have appeared in the "A Berkeley View of ..." series. Besides the one summarized here, there is the 2009 paper "Above the Clouds: A Berkeley View of Cloud Computing", whose Google Scholar citation count has reached 12,042.
A quick look back at the 2009 predictions about cloud computing: viewed from today, they turned out to be fairly accurate.
1. Can cloud providers be profitable?
Yes: with mid-range servers in remote locations, at sufficient scale and utilization, a provider can be profitable (cheaper power, and deploying close to users saves external bandwidth).
For reference, the years the major public clouds appeared:
AWS 2006 (now the world's largest)
Alibaba Cloud 2009 (now China's largest)
Baidu Cloud 2015
Tencent Cloud 2016
2. Is it feasible and cost-effective for enterprise R&D users to adopt the public cloud?
The conclusion: it is feasible, and it is cost-effective.
Hardware prices fall over time, and different kinds of hardware fall at different rates.
Moore's Law
Google's other early products, such as Gmail ("why not an 'unlimited' inbox?"), grew out of the same idea. As Marissa Mayer put it, we firmly believed in Moore's Law, so we boldly made the attempt: Gmail spread by word of mouth, with everyone waiting for the luck of an invitation. An ordinary user does not use much storage in the first year or two, and by the time their data piles up, the price per GB of storage has already dropped by an order of magnitude. So when you change your mindset and dare to think what others will not, your options widen, your way of doing things changes abruptly, and your cost structure ends up completely different. When Yahoo Mail, free under 4 MB and paid above 4 MB, saw users flood into Gmail and rushed to follow, it found that its IOE stack (IBM minicomputers, Oracle databases, EMC storage) simply could not compete on cost structure. That left an awkward choice: bleed money to keep up, or cut losses and rebuild the system?
3. Challenges cloud computing needs to solve
1) User programs must adapt to virtual machines (performance loss: CPU/RAM ~4%, disk I/O ~26%)
Application services and storage services are offered separately
2) Fast start-up and shut-down must be supported
Virtual machine images should be kept as small as possible
3) How to achieve higher availability (distribution, microservices)
Split into microservices and deploy across regions
4) Preventing cloud lock-in
Enterprise users will try to deploy on multiple clouds and migrate flexibly between them
5) Access to and protection of private data
Data can be synchronized to a private cloud
II. Background of This Paper
1. The authors and their lab
1) The authors (14 in total) are mostly leading figures in academia and industry, including startup founders
https://people.eecs.berkeley.edu/~istoica/
https://people.eecs.berkeley.edu/~dawnsong/
http://people.eecs.berkeley.edu/~jordan/
2) The authors span multiple research areas
Distributed systems, AI, security, statistical algorithms, networking, database systems, embedded systems
III. The Main Problems Discussed
Four application trends and the corresponding nine open research challenges
1. Four trends in AI applications
1) Mission-critical AI
Assisting humans with specific tasks, potentially doing them better than humans: surgical robots, autonomous driving, robot vacuums
Challenges: Design AI systems that learn continually by interacting with a dynamic environment, while making decisions that are timely, robust, and secure.
2) Personalized AI
Collecting user data to provide more personalized AI services, e.g. personal assistants (such as 助理來也), iPhone X's personalized Face ID unlock, or autonomous driving tuned to different driving styles
Challenges: Design AI systems that enable personalized applications and services yet do not compromise users' privacy and security.
3) AI across organizations
Sharing training data while protecting data ownership, e.g. in the hospital and banking industries (where the organizations compete with one another)
Challenges: Design AI systems that can train on datasets owned by different organizations without compromising their confidentiality, and in the process provide AI capabilities that span the boundaries of potentially competing organizations.
4) AI demands outpacing Moore's Law
An estimated 400 ZB of data (1 ZB = 1024 × 1024 × 1024 × 1024 GB) will be generated in 2018, and the volume is expected to keep growing exponentially through 2025. Moore's Law has effectively ended and can no longer keep up with AI's demands for compute, storage, or network capacity.
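To put that figure in perspective, a quick worked conversion (my own arithmetic, using the binary prefixes of the note above):

```latex
1\,\mathrm{ZB} = 1024^{4}\,\mathrm{GB} = 2^{70}\,\mathrm{B} \approx 1.18 \times 10^{21}\,\mathrm{B},
\qquad
400\,\mathrm{ZB} \approx 4.7 \times 10^{23}\,\mathrm{B}.
```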
Challenges: Develop domain-specific architectures and software systems to address the performance needs of future AI applications in the post-Moore's Law era, including custom chips for AI workloads, edge-cloud systems to efficiently process data at the edge, and techniques for abstracting and sampling data.
2. The nine open research challenges
Acting in dynamic environments
Example: security patrol robots
consider a group of robots providing security for a building. When one robot breaks or a new one is added, the other robots must update their strategies for navigation, planning, and control in a coordinated manner.
Similarly, when the environment changes, either due to the robots' own actions or to external conditions (e.g., an elevator going out of service, or a malicious intruder), all robots must re-calibrate their actions in light of the change.
R1: Continual learning.
The current way of building models is: offline training -> optimization -> online prediction. Even the freshest pipelines turn around on the order of hours.
To improve adaptability, more automated pipelines will be introduced, which in turn raises the security issues discussed later.
Online learning: the model is updated by training online.
RL is the expected direction: train extensively in simulated environments, but this requires substantial systems-level optimization,
"requiring millions or even billions of simulations to explore the solution space and 'solve' complex tasks." No suitable system exists for this yet.
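As a rough illustration of the contrast with an hours-long offline pipeline (my own sketch, not from the paper), a minimal online-learning loop that updates a linear model as each observation arrives might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

def data_stream(n):
    """Simulate observations arriving one at a time."""
    for _ in range(n):
        x = rng.normal(size=3)
        y = true_w @ x + rng.normal(scale=0.1)
        yield x, y

def online_sgd(stream, dim, lr=0.01):
    """Update a linear model immediately on each observation, instead of
    waiting for an offline retrain-and-redeploy cycle."""
    w = np.zeros(dim)
    for x, y in stream:
        grad = (w @ x - y) * x   # per-sample gradient of the squared error
        w -= lr * grad
    return w

w = online_sgd(data_stream(10_000), dim=3)
print(w)  # converges toward true_w
```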
Simulated reality (SR).
SR enables an agent to learn not only much faster but also much more safely.
Consider a robot cleaning an environment that encounters an object it has not seen before, e.g., a new cellphone. The robot could physically experiment with the cellphone to determine how to grasp it, but this may require a long time and might damage the phone. In contrast, the robot could scan the 3D shape of the phone into a simulator, perform a few physical experiments to determine rigidity, texture, and weight distribution, and then use SR to learn how to successfully grasp it without damage.
"A test task that took 30 minutes on the Apollo 1.5 simulation system takes only 30 seconds on the optimized simulation system." — 王京傲 (Jingao Wang), Baidu, CES 2018
Research challenges:
(1) Build systems for RL that fully exploit parallelism, while allowing dynamic task graphs, providing millisecond-level latencies, and running on heterogeneous hardware under stringent deadlines.
(2) Build systems that can faithfully simulate the real-world environment, as the environment changes continually and unexpectedly, and run faster than real time.
R2: Robust decisions.
An example:
the Microsoft Tay chat bot relied heavily on human interaction to develop rich natural dialogue capabilities. However, when exposed to Twitter messages, Tay quickly took on a dark personality.
Once online learning is deployed, whenever the system encounters adversarial or highly uncertain data, it should either make no decision at all or fall back to a predefined safe action (e.g., an autonomous car slowing down and pulling over).
Research challenges:
(1) Build fine-grained provenance support into AI systems to connect outcome changes (e.g., reward or state) to the data sources that caused these changes, and automatically learn causal, source-specific noise models.
(2) Design API and language support for developing systems that maintain confidence intervals for decision-making, and in particular can process unforeseen inputs.
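A minimal sketch of the "fall back to a safe action when uncertain" idea, assuming a hypothetical ensemble of scoring models whose disagreement stands in for a confidence interval:

```python
import numpy as np

def decide(models, x, max_spread=0.2, safe_action="slow_down_and_stop"):
    """Uncertainty-aware decision sketch: query an ensemble of scoring models
    and fall back to a predefined safe action when they disagree too much.

    `models` is any list of callables returning an action score in [0, 1];
    the spread of their scores is used as a crude confidence interval.
    """
    scores = np.array([m(x) for m in models])
    spread = scores.max() - scores.min()   # width of the "confidence interval"
    if spread > max_spread:
        return safe_action                 # too uncertain: take the safe default
    return "proceed" if scores.mean() > 0.5 else "yield"
```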
R3: Explainable decisions.
Especially important in medical AI.
Which parts of the input data led to the conclusion?
For example, one may wish to know what features of a particular organ in an X-ray (e.g., size, color, position, form) led to a particular diagnosis and how the diagnosis would change under minor perturbations of those features.
Research challenges:
(1) Build AI systems that can support interactive diagnostic analysis, that faithfully replay past executions, and that can help to determine the features of the input that are responsible for a particular decision, possibly by replaying the decision task against past perturbed inputs. More generally, provide systems support for causal inference.
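One simple (assumed, not prescribed by the paper) way to approximate "which input features are responsible" is to replay the decision on perturbed inputs and score each feature by how much the output moves:

```python
import numpy as np

def perturbation_attribution(model, x, noise=0.05, trials=100, seed=0):
    """Score each input feature by how much small random perturbations of that
    feature alone change the model's output (a replay-against-perturbed-inputs
    style explanation; illustrative only).

    `model` is any callable mapping a NumPy feature vector to a scalar.
    """
    rng = np.random.default_rng(seed)
    base = model(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        for _ in range(trials):
            x_p = x.copy()
            x_p[i] += rng.normal(scale=noise)   # perturb one feature at a time
            scores[i] += abs(model(x_p) - base)
    return scores / trials                      # higher score = more influential feature
```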
Secure AI
Direct attacks that take control of the system.
Disclosed TensorFlow vulnerabilities:
"What goes wrong in this vulnerability is the handling of AI models. One attack scenario: a hacker publishes an AI model online for people to use; whoever downloads and runs it is compromised. Alternatively, a hacker who can control the AI model used by some system can mount an attack. So systems that use TensorFlow must take care not to load problematic or hacker-modified AI models.
Only two kinds of publicly disclosed AI-framework vulnerabilities are known so far: the earlier vulnerabilities 360 found in third-party components pulled in by three AI frameworks, and the vulnerability in the framework itself that we have found this time."
R4: Secure enclaves.
For example, when deploying on a public cloud or other shared cluster, isolation is enforced at code runtime and backed by hardware: code inside the enclave can access the data, while other processes cannot see the code or data running inside the enclave. In practice, the recommendation is to split the code into a confidential part and a non-confidential part and run them in different runtimes.
Intel's Software Guard Extensions (SGX) [5] provides a hardware-enforced isolated execution environment. Code inside SGX can compute on data, while even a compromised operating system or hypervisor (running outside the enclave) cannot see this code or data. SGX also provides remote attestation [6], a protocol enabling a remote client to verify that the enclave is running the expected code.
Research challenges:
(1) Build AI systems that leverage secure enclaves to ensure data confidentiality, user privacy and decision integrity, possibly by splitting the AI system's code between a minimal code base running within the enclave, and code running outside the enclave. Ensure the code inside the enclave does not leak information, or compromise decision integrity.
R5: Adversarial learning.
Evasion attacks
At inference time: modifying an input (e.g., an image) to cause a wrong classification. There is currently no good general defense.
Data poisoning attacks
At training time: mixing wrongly labeled data into the training set. Especially when the AI system learns continually, untrusted training data is even more likely to cause errors.
Replay and explainability can be used to identify and weed out some of the offending data.
Research challenges:
(1) Build AI systems that are robust against adversarial inputs both during training and prediction (e.g., decision making), possibly by designing new machine learning models and network architectures, leveraging provenance to track down fraudulent data sources, and replaying to redo decisions after eliminating the fraudulent sources.
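To make the evasion-attack idea concrete, here is a toy fast-gradient-sign-style perturbation against a plain logistic-regression classifier (my own illustrative example; the paper does not single out this attack):

```python
import numpy as np

def fgsm_attack(w, b, x, y, eps=0.1):
    """Fast-gradient-sign style evasion sketch against a logistic-regression
    classifier.

    w, b  : model weights and bias
    x, y  : a correctly classified input and its true label (0 or 1)
    eps   : perturbation budget per feature
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability of class 1
    grad_x = (p - y) * w                     # gradient of cross-entropy loss w.r.t. the input
    return x + eps * np.sign(grad_x)         # small step that increases the loss

# Hypothetical usage: the perturbed input stays numerically close to the
# original yet can flip the predicted class.
w, b = np.array([2.0, -1.0, 0.5]), 0.0
x, y = np.array([0.3, 0.2, -0.1]), 1
x_adv = fgsm_attack(w, b, x, y, eps=0.3)
```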
R6: Shared learning on confidential data.
Examples (the parties both compete and cooperate):
Banks sharing anti-fraud models and data
Hospitals sharing flu-detection data and models
The goal: train models while keeping each party's data confidential.
One approach is to run everything inside the hardware-isolated secure enclaves described in R4.
Another approach is to use special-purpose algorithms, but these slow training down considerably.
Multi-party computation (MPC): several parties jointly carry out a computation:
(1) local computation: each party computes gradients on its own data
(2) computation using MPC: the gradients are combined without revealing them
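A didactic sketch of step (2), assuming a toy additive secret-sharing scheme (hypothetical helper names; no communication layer or malicious-party defenses):

```python
import numpy as np

def share(gradient, n_parties, rng):
    """Split one party's gradient into additive secret shares that sum to it."""
    shares = [rng.normal(size=gradient.shape) for _ in range(n_parties - 1)]
    shares.append(gradient - sum(shares))
    return shares

def secure_sum(all_gradients, seed=0):
    """Toy secure aggregation: each party only ever sees random-looking shares
    of the others' gradients, yet the reconstructed sum is exact."""
    rng = np.random.default_rng(seed)
    n = len(all_gradients)
    # Each party splits its gradient and sends share j to party j.
    received = [[] for _ in range(n)]
    for g in all_gradients:
        for j, s in enumerate(share(g, n, rng)):
            received[j].append(s)
    # Each party sums the shares it received; a partial sum alone reveals nothing.
    partial_sums = [sum(shares) for shares in received]
    return sum(partial_sums)       # equals the true sum of all gradients

grads = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([-0.1, 0.4])]
print(secure_sum(grads))           # [0.3, 0.2]
```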
Research challenges:
Build AI systems that (1) can learn across multiple data sources without leaking information from a data source during training or serving, and (2) provide incentives to potentially competing organizations to share their data or models.
AI-specific architectures
R7: Domain-specific hardware.
Moore's Law is ending, while AI's demands on compute and memory access keep growing.
On the compute side:
TPU, FPGA
On the DRAM and SSD side:
3D XPoint from Intel and Micron aims to provide 10x storage capacity with DRAM-like performance (much more capable memory).
STT MRAM aims to succeed Flash, which may hit similar scaling limits as DRAM (a much more capable SSD).
Server configurations will become more varied and more heterogeneous.
Architecture design reference:
https://bar.eecs.berkeley.edu/projects/2015-firebox.html
Research challenges:
(1) Design domain-specific hardware architectures to improve the performance and reduce power consumption of AI applications by orders of magnitude, or enhance the security of these applications. (faster, lower power, more secure)
(2) Design AI software systems to take advantage of these domain-specific architectures, resource disaggregation architectures, and future non-volatile storage technologies. (systems that can schedule all this heterogeneous hardware)
R8: Composable AI systems
- Model composition
The importance of modularity and reuse: analogous to today's microservice architectures.
Future AI systems are likewise expected to expose layered API services.
Ways of composing, e.g. our own design of one detection pass feeding several recognition/classification passes:
A sequence of models ordered from low to high accuracy, queried serially (trading latency against accuracy); see the sketch after this list.
Small models on the device, large models in the cloud.
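A minimal sketch of such a serial cascade, assuming each (hypothetical) model returns a label plus a confidence score:

```python
def cascade_predict(models, x, confidence_threshold=0.9):
    """Serial model cascade sketch: try cheap, lower-accuracy models first and
    only fall through to slower, more accurate ones when confidence is low.

    `models` is assumed to be ordered from least to most accurate; each returns
    (label, confidence). The last model's answer is always accepted.
    """
    for model in models[:-1]:
        label, confidence = model(x)
        if confidence >= confidence_threshold:
            return label              # early exit keeps latency low
    label, _ = models[-1](x)          # most accurate (and most expensive) model
    return label
```

The edge/cloud split above is the same pattern: the small on-device model sits first in the cascade and the large cloud model last.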
Research challenges:
(1) designing a declarative language to capture the topology of these components and specifying performance targets of the applications,
(2) providing accurate performance models for each component, including resource demands, latency and throughput, and
(3) scheduling and optimization algorithms to compute the execution plan across components, and map components to the available resources to satisfy latency and throughput requirements while minimizing costs.
This is similar to what a SQL query planner does: make full use of available resources, and batch requests with configurable latency controls.
Reference architectures (server side):
TensorFlow Serving
Clipper
- Action composition
Compose finer-grained actions into higher-level options.
Higher-level options mean fewer choices to search over and therefore faster training.
Example:
In autonomous driving, an abstracted option might be: change lane = (accelerate or decelerate, steer left or steer right, signal the lane change).
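A small sketch of the idea, with hypothetical primitive actions composed into an option object the planner can treat as a single choice:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical primitive actions the agent already knows how to execute.
def signal_lane_change(state): ...
def accelerate(state): ...
def steer_left(state): ...

@dataclass
class Option:
    """A higher-level action composed from primitives: the planner picks among
    a handful of options instead of searching over many low-level actions."""
    name: str
    steps: List[Callable]

    def execute(self, state):
        for step in self.steps:
            step(state)   # run each primitive in order

change_lane_left = Option("change_lane_left",
                          [signal_lane_change, accelerate, steer_left])
```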
Research challenges:
(1) Design AI systems and APIs that allow the composition of models and actions in a modular and flexible manner, and develop rich libraries of models and options using these APIs to dramatically simplify the development of AI applications.
R9: Cloud-edge systems
Advantages of the edge:
edge devices to improve security, privacy, latency and safety
Technical difficulties:
the cost of adapting to many kinds of edge devices and software stacks
compilers and just-in-time (JIT) technologies to efficiently compile on-the-fly complex algorithms and run them on edge devices. This approach can leverage recent code generation tools, such as TensorFlow's XLA [107], Halide [50], and Weld [83].
The "small model on the edge, large model in the cloud" pattern is already used in video recognition systems; the workload needs to shift flexibly between edge and cloud.
Edge model: small, lower accuracy, updated less often.
Cloud model: large, higher accuracy, updated more often.
Even with 5G and a powerful cloud, network and storage capacity and cost mean we cannot keep all of the data devices generate, so edge data has to be reduced to samples and sketches (upload summary statistics, retain a sample); a sketch follows below.
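A toy illustration of that on-device reduction, assuming a reservoir sample plus a running mean as the "sketch" that gets uploaded (my own example, not the paper's design):

```python
import random

class EdgeDataReducer:
    """Toy edge-side data retention: keep a fixed-size uniform random sample
    (reservoir sampling) plus running summary statistics to upload, instead
    of shipping or storing the full stream."""

    def __init__(self, sample_size=1000, seed=0):
        self.sample_size = sample_size
        self.reservoir = []
        self.count = 0
        self.total = 0.0
        self.rng = random.Random(seed)

    def observe(self, value):
        self.count += 1
        self.total += value                     # running statistic (for the mean)
        if len(self.reservoir) < self.sample_size:
            self.reservoir.append(value)
        else:
            j = self.rng.randrange(self.count)  # classic reservoir-sampling step
            if j < self.sample_size:
                self.reservoir[j] = value

    def summary(self):
        """What actually leaves the device: a small summary, not the raw stream."""
        return {"count": self.count,
                "mean": self.total / self.count if self.count else 0.0,
                "sample": self.reservoir}
```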
Research challenges:
Design cloud-edge AI systems that
(1) leverage the edge to reduce latency, improve safety and security, and implement intelligent data retention techniques,
(2) leverage the cloud to share data and models across edge devices, train sophisticated computation-intensive models, and take high quality decisions.