Amazon launches generative AI play in AWS Bedrock

2023-04-17

Amazon is the latest hyperscaler to take on the world of foundation AI including generative and large language models. It has launched a new platform called AWS Bedrock that includes access to in-house tools such as the Titan family of foundation models, and pre-trained models from start-ups like AI21 Labs, Anthropic and Stability AI. The company says the focus is on providing a range of models for use in “enterprise-scale” AI tools. One expert said Amazon has “a long way to go” to catch up with other players in the field.

Amazon says Bedrock will be for enterprise-scale AI tools and allow access to multiple models within a project.

Opening AWS up as a marketplace for multiple AI models mirrors moves by Google to offer those made by third parties in Google Cloud alongside its own PaLM, including from Midjourney and AI21 Labs. Microsoft has gone “all in” with OpenAI through its Azure cloud, offering GPT-4, ChatGPT and other models for customers.

Amazon says it will allow companies to train chatbots and AI tools on their own proprietary data without having to invest in costly data centres and expensive AI chips. AWS will use a combination of its own custom AI chips and those from Nvidia. “We’re able to land hundreds of thousands of these chips, as we need them,” explained Dave Brown, VP of Elastic Compute Cloud at AWS.

The launch of Bedrock has been in the works for the past few months, with AWS signing partnership agreements with Stability AI and other start-ups, as well as investing more in generative AI apps and its underlying technology. Hugging Face has also worked to bring its library of text-generating models onto AWS and Amazon has launched an AI accelerator for startups.

AWS is the largest hyperscaler in the world but is facing increasing competition from Google Cloud, Microsoft Azure and others, largely off the back of their AI offerings. Google and Microsoft have both invested heavily in generative AI tools, including chatbots such as ChatGPT and Google Bard.

Amazon hasn’t yet unveiled pricing for its AI offerings and full details aren’t clear, but users will be able to tap into the various foundation models via an API. The service is aimed at “enterprise-scale” applications rather than individual tools.
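The article doesn’t document the API itself, but a hedged sketch of what invoking a foundation model through the AWS SDK might look like follows. The `bedrock-runtime` client name, the `amazon.titan-text-express-v1` model ID and the request-body field names are assumptions for illustration, not details confirmed by Amazon here:

```python
import json

def build_titan_request(prompt: str, max_tokens: int = 256) -> str:
    """Build a JSON request body for a hypothetical Titan text model.
    The field names ("inputText", "textGenerationConfig") are assumed
    for illustration, not a confirmed Bedrock schema."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens},
    })

if __name__ == "__main__":
    import boto3  # requires AWS credentials and Bedrock access
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # assumed model ID
        body=build_titan_request("Write a short product description."),
    )
    print(json.loads(response["body"].read()))
```

The point of the single-API design is that swapping `modelId` would be all it takes to move a project from, say, a Titan model to a Jurassic-2 or Claude model.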

Multiple AI models

AI21 Labs’ Jurassic-2 family of foundation models is particularly suited to generating multilingual text, while Anthropic’s Claude is well suited to text processing and conversational tools. Stability AI brings text-to-image tools to Bedrock, including Stable Diffusion, which can be used for images, art, logos and graphic design. The most recent version of Stable Diffusion has improved text accuracy and clarity. Using Bedrock, developers will be able to create tools that combine models.

Amazon’s own Titan family includes a text-generation model and an embeddings model. The former can generate text such as a blog post or a sales pitch, while the latter translates text into numerical representations that capture its semantic meaning.
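The embeddings idea can be illustrated without any AWS dependency: once text is mapped to vectors, semantic closeness becomes vector arithmetic. A minimal sketch with made-up toy vectors (a real embeddings model would produce hundreds of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors: values near
    1.0 mean similar direction (similar meaning), near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (illustrative only).
sales_pitch = [0.9, 0.1, 0.2]
blog_post = [0.8, 0.2, 0.3]
invoice = [0.1, 0.9, 0.1]

# A sales pitch reads more like a blog post than like an invoice.
assert cosine_similarity(sales_pitch, blog_post) > cosine_similarity(sales_pitch, invoice)
```

Applications such as semantic search and document clustering are built on exactly this comparison, run over embeddings of real documents.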


Any of the models can then be further trained on labelled datasets stored in S3, Amazon’s cloud storage service. According to Amazon, as few as 20 well-labelled data examples are required to customise a model against proprietary information, and none of that data will be used to train the underlying models.
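Amazon doesn’t specify what such a labelled dataset looks like; a hedged sketch follows, assuming a simple JSONL prompt/completion format (the field names, the example text and the S3 path are illustrative, not a documented Bedrock schema):

```python
import json

# 20 labelled prompt/completion pairs -- the rough scale Amazon says
# is enough to customise a model against proprietary data.
examples = [
    {
        "prompt": f"Summarise support ticket #{i}",
        "completion": f"Ticket #{i}: resolved by resetting the user's session.",
    }
    for i in range(1, 21)
]

# Write one JSON object per line (JSONL), a common fine-tuning format.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file would then be uploaded to S3, e.g. with the AWS CLI:
#   aws s3 cp training_data.jsonl s3://my-bucket/bedrock/training_data.jsonl
```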


“At Amazon, we believe AI and ML are among the most transformational technologies of our time, capable of tackling some of humanity’s most challenging problems. That is why, for the last 25 years, Amazon has invested heavily in the development of AI and ML, infusing these capabilities into every business unit,” the company said in a statement.

In the same statement Amazon highlighted the use of chips to bring down the cost of running generative AI workloads, explaining that these ultra-large models require massive compute power to run in production and so AWS Inferentia chips can be used to make this more efficient and reduce cost at enterprise scale.

AWS Bedrock has ‘a lot of catching up to do’

The company is also opening up its answer to Microsoft’s GitHub Copilot, a tool widely used by developers to help write code. Amazon is making CodeWhisperer available for free for individual developers. It is an AI-powered coding companion that can offer code suggestions based on previously written code or comments. There are no usage limits for the free version, but a paid tier, for professional use, also includes enterprise security and admin capabilities.

Daniel Stodolsky, former Google Cloud VP and current SVP of cloud at SambaNova, said the old cloud argument of bringing compute to your data doesn’t stack up in the new world of generative AI. “Whereas other cloud services such as predictive analytics rely on huge volumes of real-time data, Amazon says the process of customising its pre-trained LLMs can be completed with as few as 20 labelled data examples,” he said.

“The trend for generative AI will be towards open best-of-breed approaches rather than vendor lock-in and closed models. It’s much better to own a large language model that’s built and fine-tuned for your use-case rather than relying on an off-the-shelf model with minimal customisation.

“The other consideration is getting value from generative AI quickly. Amazon’s Bedrock service is only in limited preview right now and anyone looking at AWS Service Terms will find that Service Level Agreements don’t apply – in other words, it’s not production ready and won’t be for some time. Generative AI is a race that’s already well underway, and Amazon clearly has a lot of catching up to do with other production-ready platforms.”

Homepage image by Dennis Diatel/Shutterstock

Read more: Google AI: PaLM model open to devs and added to Workspace

Topics in this article : AI , AWS , Cloud
