ParlAI is an open-source framework from Facebook for training and evaluating AI models on a wide range of openly available dialogue datasets: a unified platform for sharing, training, and evaluating dialogue models that supports many conversational tasks.
The corresponding paper is: Recipes for building an open-domain chatbot
Below is a brief summary of the paper:
Abstract
Building open-domain chatbots is a challenging area for machine learning research.
While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot.
Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona.
We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy.
We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements.
We then discuss the limitations of this work by analyzing failure cases of our models.
Introduction
In this work, we provide recipes for building open domain chatbots that perform well in human evaluations.
It has been shown across the field of NLP (Devlin et al., 2019) and in conversational agents in particular (Dinan et al., 2020; Zhang et al., 2019; Adiwardana et al., 2020) that pre-training on large corpora is important. Beyond simply scaling models, the two main takeaways from our study are:
1. Blending Skills
   Large improvements can be made by fine-tuning on data that emphasizes desirable conversational skills. We select tasks that make the model focus on personality and engagingness, knowledge, and empathy, achieving large gains by using the recently introduced Blended Skill Talk (BST) set-up (Smith et al., 2020), which targets those aspects by providing training data and initial conversational context (personas and topics). Small models using BST can match or outperform larger models that do not. While BST emphasizes desirable traits, we also show this tuning can minimize undesirable traits learnt from large corpora, such as toxicity.
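The multi-task blending idea can be sketched as follows. The task names match the datasets the paper fine-tunes on, but the one-string "examples" and the uniform sampling scheme are simplifying assumptions for illustration, not the paper's actual training mixture:

```python
import random

# Toy stand-ins for the four fine-tuning tasks; real training would
# stream actual dialogue episodes instead of placeholder strings.
TASKS = {
    "convai2": ["persona-focused example"],
    "wizard_of_wikipedia": ["knowledge-grounded example"],
    "empathetic_dialogues": ["empathetic response example"],
    "blended_skill_talk": ["blended-skill example"],
}

def blended_batches(batch_size=4, num_batches=3, seed=0):
    """Build fine-tuning batches whose examples are each drawn from a
    task chosen uniformly at random, so a single model is exposed to
    every skill during fine-tuning."""
    rng = random.Random(seed)
    names = list(TASKS)
    for _ in range(num_batches):
        yield [rng.choice(TASKS[rng.choice(names)]) for _ in range(batch_size)]

for batch in blended_batches():
    print(batch)
```

In practice the mixture weights between tasks matter, and frameworks like ParlAI handle this multi-tasking internally; the point here is only that one model sees all skill-specific datasets.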
2. Generation Strategies
   The choice of decoding algorithm is of critical importance, and two models with the same perplexity but different decoding algorithms can give vastly different results. In particular we show that the length of the bot's utterances is crucial to human judgments of quality: too short and the responses are seen as dull or showing a lack of interest; too long and the bot appears to waffle and not listen. We show, contrary to previous work which reports that beam search is inferior to sampling (Holtzman et al., 2019; Adiwardana et al., 2020), that careful choice of search hyperparameters can give strong results by controlling trade-offs. In particular, constraining the minimum beam length gives a crucial control over the dull versus spicy spectrum of responses.
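A minimal sketch of this minimum-length constraint, using a toy "model" whose next-token distribution always prefers to stop talking immediately (the probabilities and vocabulary are invented for illustration, not taken from the paper):

```python
import math

EOS = "<eos>"

def fake_next_scores(prefix):
    # Toy stand-in for a model's next-token log-probabilities; this
    # "model" always wants to end its reply as early as possible.
    return {EOS: math.log(0.6), "hi": math.log(0.3), "there": math.log(0.1)}

def beam_search(num_beams=2, max_length=5, min_length=1):
    beams = [([], 0.0)]  # (tokens so far, cumulative log-probability)
    finished = []
    for _ in range(max_length):
        candidates = []
        for tokens, score in beams:
            for tok, logp in fake_next_scores(tokens).items():
                # The key constraint: EOS is forbidden until the hypothesis
                # reaches min_length, forcing longer, less "dull" replies.
                if tok == EOS and len(tokens) < min_length:
                    continue
                candidates.append((tokens + [tok], score + logp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for tokens, score in candidates:
            if tokens[-1] == EOS:
                finished.append((tokens, score))
            elif len(beams) < num_beams:
                beams.append((tokens, score))
        if not beams:
            break
    finished.extend(beams)  # keep unfinished hypotheses as a fallback
    best_tokens = max(finished, key=lambda c: c[1])[0]
    return [t for t in best_tokens if t != EOS]

print(beam_search(min_length=1))  # → ['hi'] (ends as soon as allowed)
print(beam_search(min_length=3))  # → ['hi', 'hi', 'hi'] (forced longer)
```

Raising `min_length` is exactly the knob described above: the search is identical, but the bot cannot choose the short, dull reply it would otherwise prefer.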
Human evaluation results are highly dependent on the precise set-up one chooses. Model performance can be strongly affected by the specific instructions given to evaluators, such as whether or not a topic is given, the overall conversation length, and the choice of human interlocutors, which may be difficult to jointly account for. We report performance when employing crowdworkers in short multi-turn conversations with no prompt.
However, in addition to that, we believe releasing models is the most reliable way to enable full insight into their capabilities. We thus make publicly available our large-scale, state-of-the-art open-domain conversational agent, including code to fine-tune it, the model weights, and code to evaluate it, so that our set-up is reproducible.
In human evaluations of engagingness our best model outperforms Meena (Adiwardana et al., 2020) in a pairwise comparison 75% to 25%, and in terms of humanness by 65% to 35% (both statistically significant, two-tailed binomial test, p < 0.01).
While the performance of our bot at first sight is very good, we do not believe we are yet close to solving the problem of open-domain conversation. We thus discuss limitations of our models, and initial attempts to solve them. In particular, our models still display: a lack of in-depth knowledge if sufficiently interrogated; a tendency to stick to simpler language; and a tendency to repeat oft-used phrases.
We show how unlikelihood training and retrieve-and-refine mechanisms are potential avenues for fixing these problems; however, our initial experiments with these methods are inconclusive. We thus discuss future possibilities for alleviating these problems, as well as methods to clearly expose and evaluate them.