Paper Title

Petals: Collaborative Inference and Fine-tuning of Large Models

Authors

Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, Colin Raffel

Abstract

Many NLP tasks benefit from using large language models (LLMs) that often have more than 100 billion parameters. With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale. Still, using these models requires high-end hardware unavailable to many researchers. In some cases, LLMs can be used more affordably via RAM offloading or hosted APIs. However, these techniques have innate limitations: offloading is too slow for interactive inference, while APIs are not flexible enough for research that requires access to weights, attention, or logits. In this work, we propose Petals - a system for collaborative inference and fine-tuning of large models that joins the resources of multiple parties. We demonstrate that this strategy outperforms offloading for very large models, running inference of BLOOM-176B on consumer GPUs at $\approx$ 1 step per second, which is enough for many interactive LLM applications. Unlike most inference APIs, Petals also natively exposes hidden states of served models, allowing users to train and share custom model extensions based on efficient fine-tuning methods.
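The abstract describes running BLOOM-176B interactively from consumer hardware by keeping the embeddings local and sending activations to transformer blocks hosted by other parties. The lines below are a minimal sketch of what a generation call looks like through the Petals Python client; the import path, the DistributedBloomForCausalLM class, and the "bigscience/bloom-petals" checkpoint name follow the project's public examples around the BLOOM release and may differ between versions, so treat them as assumptions rather than a definitive interface.

    from transformers import BloomTokenizerFast
    from petals import DistributedBloomForCausalLM  # import path may vary across petals versions

    MODEL_NAME = "bigscience/bloom-petals"  # assumed name of the public-swarm checkpoint

    tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
    # Word embeddings and the LM head are loaded on the local machine;
    # the transformer blocks are served by remote GPUs joined into a swarm.
    model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

    inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
    # Autoregressive generation reuses server-side attention caches between steps,
    # which is what makes the reported ~1 step/second interactive inference feasible.
    outputs = model.generate(inputs, max_new_tokens=8)
    print(tokenizer.decode(outputs[0]))

Because the client sees the hidden states returned by the served blocks, parameter-efficient fine-tuning methods (for example, trainable prompts or adapters) can be run on the local machine against the remotely hosted model, which is the flexibility the abstract contrasts with hosted inference APIs.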
