GPT-3 developer OpenAI releases new Davinci generative text model

2022-12-06 04:36:52

Artificial intelligence company OpenAI has published a new generative text model that it says produces higher-quality writing, can handle complex instructions, and generate longer-form content. Known as text-davinci-003, the model is part of the GPT-3 family and builds on earlier systems.

OpenAI has released the third version of its Davinci model, which forms part of GPT-3. (Photo by Grey82/Shutterstock)

The model is built on the Davinci engine, which is designed to perform a wide range of tasks with fewer instructions needed to achieve the required output, and is particularly useful where in-depth knowledge of a subject is required. Such tasks include summarising texts and producing narrative content and dialogue.

This deeper understanding comes at a price: Davinci-based models are more computationally demanding, so each API call costs slightly more than with simpler models such as Ada and Babbage.

“This model builds on top of our previous InstructGPT models, and improves on a number of behaviours that we’ve heard are important to you as developers,” OpenAI said in a statement.

These include higher-quality writing, which OpenAI says will help applications built on the API deliver “clearer, more engaging, and more compelling content”, and the ability to handle more complex instructions, “meaning you can get even more creative with how you make use of its capabilities now”.

OpenAI says Davinci is a marked improvement on earlier models when it comes to producing long-form content, in part through in-text instructions, “allowing you to take on tasks that would have previously been too difficult to achieve”.

Asking text-davinci-003 to summarise the main benefits of using generative text AI, it produced the following paragraph: “Generative text AI is a type of Artificial Intelligence (AI) technology that can produce human-like text. It can be used to create content such as stories, articles, and summaries. The main benefits of using generative text AI are that it can save time and money, generate unique content, and create personalised experiences for users.”

This is the response from text-davinci-002, the previous-generation model, to the same prompt: “There are many benefits to using generative text AI, including the ability to create realistic text, the ability to experiment with different language models, and the ability to create text that is difficult for humans to generate.”
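For readers who want to reproduce the comparison, the sketch below shows how the same prompt could be sent to both models through the openai Python library's legacy Completion endpoint (v0.x); the API key placeholder and parameter values are illustrative assumptions rather than details from the article.

```python
# Minimal sketch: send the same prompt to text-davinci-002 and
# text-davinci-003 and print each model's completion for comparison.
# Assumes the legacy openai Python library (v0.x) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

prompt = "Summarise the main benefits of using generative text AI."

for model in ("text-davinci-002", "text-davinci-003"):
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=150,   # illustrative value
        temperature=0.7,  # illustrative value
    )
    print(f"--- {model} ---")
    print(response.choices[0].text.strip())
```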

The main new capability is support for inserting completions within existing text. This involves supplying a suffix prompt as well as a prefix prompt, allowing the model to transition between paragraphs and better define the flow of the copy.
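A minimal sketch of how that insertion feature might be used, again assuming the legacy openai Python library's Completion endpoint: the text before the gap is passed as the prompt, the text after it as the suffix, and the model generates a passage to bridge the two. The example sentences and parameter values are illustrative only.

```python
# Minimal sketch of insertion: the model fills the gap between the
# prefix (prompt) and the suffix. Assumes the legacy openai Python
# library (v0.x) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Generative text models can save writers a great deal of time. ",
    suffix=" For these reasons, interest in the technology keeps growing.",
    max_tokens=80,    # illustrative value
    temperature=0.7,  # illustrative value
)

# The returned text is the passage generated to bridge prefix and suffix.
print(response.choices[0].text.strip())
```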

OpenAI progressing towards GPT-4?

Text-davinci-003 is an addition to GPT-3, OpenAI’s massive natural language processing model, which has some 175 billion parameters and was released in May 2020. Generative Pre-trained Transformer 3, to give it its full title, is a deep-learning system that OpenAI trained on text drawn from millions of websites.

Rumours surrounding its successor, GPT-4, are growing, with some suggesting it could launch between December and February and have as many as a trillion parameters, making it significantly larger and more ‘human-like’ in its output than GPT-3. OpenAI CEO Sam Altman has denied it will be that large, saying it could be similar in size to GPT-3 but more efficient.

Alberto Romero, AI and technology analyst at CambrianAI, wrote on Substack that early adopters are already being given beta access to GPT-4 and required to sign NDAs about its functionality, with anecdotal evidence suggesting it is “better than people could expect”.

He predicts that GPT-4 will be advanced enough to easily pass the Turing Test, the benchmark of machine intelligence devised by British mathematician Alan Turing in 1950. This is in part inspired by a tweet from Altman on 9 November showing an image of Darth Vader with the caption: “Don’t be too proud of this technological terror you’ve constructed, the ability to pass the Turing test is insignificant next to the power of the force”.

“Turing test is generally regarded as obsolete,” Romero wrote in his article. “In essence, it’s a test of deception (fooling a person) so an AI could theoretically pass it without possessing intelligence in the human sense. It’s also quite narrow, as it’s exclusively focused on the linguistic domain.”

Unverified rumours shared on Reddit suggest it will be vast in terms of parameter count but sparse, meaning large portions of the model remain inactive until needed, giving it an effective size similar to smaller but denser models such as GPT-3 itself.

“OpenAI has changed course with GPT-4 a few times throughout these two years, so everything is in the air. We’ll have to wait until early 2023—which promises to be another great year for AI,” Romero said.

Read more: Foundational models are the future of AI. They’re also deeply flawed.

Topics in this article: AI, GPT-3, OpenAI
