It’s been a little over a year since the OpenAI API was opened without a waiting list. What does it give access to? What features does the OpenAI API provide, and how do you implement them? At the beginning of 2022, we took a quick guided tour. Since then, the functional scope has evolved. The structure and endpoints of the API have changed, as has, to a lesser extent, the terminology. For example, for categorizing the underlying models, the concept of engines has given way to that of families. There are three of these:
The GPT-3 family still includes four models, under the same names as initially. And on the same principle: as you progress through the alphabet, they become more capable, but also more expensive. OpenAI advises experimenting with Davinci, then gradually working your way down to the right compromise, keeping these task-to-model mappings in mind:
GPT-3 models are used with the /completions endpoint. Tasks previously assigned to other endpoints can now be submitted to it: /classifications, /search (semantic search), and /answers (Q&A). Since our first overview, two experimental options have been added to /completions. One allows Davinci to insert text into the original prompt. The other edits that same prompt, using a dedicated model (text-davinci-edit-001).
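As a rough illustration, here is what the JSON body of a /completions request looks like, including the experimental insert mode. This is a hedged sketch: the parameter names (`model`, `prompt`, `suffix`, `max_tokens`, `temperature`) follow the API documentation of the period, but the default model name used here is an assumption.

```python
def build_completion_request(prompt, model="text-davinci-003",
                             max_tokens=64, temperature=0.7,
                             suffix=None):
    """Assemble the JSON body for POST /v1/completions.

    Passing `suffix` enables the experimental insert mode: the
    model fills in text between `prompt` and `suffix`.
    """
    body = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    if suffix is not None:
        body["suffix"] = suffix
    return body

# A plain completion request...
req = build_completion_request("Say hello in French.")
# ...and an insertion request that fills in a function body.
insert_req = build_completion_request(
    "def add(a, b):\n", suffix="\n    return result\n")
```

In practice the official Python bindings send this body for you; the sketch only shows which knobs the endpoint exposes.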
Older versions of Ada, Babbage, Curie, and Davinci remain available. They are preferably not used as-is, but fine-tuned. This is done with the /fine-tunes endpoint, possibly after having uploaded training data beforehand (/files). By default, it is Curie that gets fine-tuned. A command-line tool can help validate and reformat the dataset.
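The flow above can be sketched in two pieces: one JSONL training record, and the body of the /fine-tunes request that references a previously uploaded file. The field names follow the documentation of the period; the file ID used here is hypothetical.

```python
import json

def training_line(prompt, completion):
    # One JSONL record in the prompt/completion shape expected
    # by /fine-tunes training files.
    return json.dumps({"prompt": prompt, "completion": completion})

def build_fine_tune_request(training_file_id, model="curie"):
    # Body for POST /v1/fine-tunes. `training_file_id` is the id
    # returned by a prior upload to /files (hypothetical here).
    return {"training_file": training_file_id, "model": model}

line = training_line("Sentiment of: great product ->", " positive")
req = build_fine_tune_request("file-abc123")
```

Note that Curie appears as the default model, matching the endpoint's documented behavior.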
The content filter still exists, with the same role as initially: estimating the “sensitivity” of the results produced by the GPT-3 and Codex models. However, the recommended endpoint has changed to /moderations. Two models are available, depending on whether you want the most recent version (text-moderation-latest) or the latest stable one (text-moderation-stable).
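A minimal sketch of the moderation request body, assuming the `input`/`model` field names from the documentation of the period:

```python
def build_moderation_request(text, latest=True):
    # Body for POST /v1/moderations: pick between the most
    # recent model and the latest stable one.
    model = ("text-moderation-latest" if latest
             else "text-moderation-stable")
    return {"input": text, "model": model}

req = build_moderation_request("some user-generated text")
```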
Another endpoint, another function: /embeddings creates vector representations of character strings. Other models can then use them to assess the proximity between these strings, for example in search engines, recommendation systems, or anomaly detection tools. With /embeddings, you can use no fewer than 16 first-generation models… and one second-generation model (the default: text-embedding-ada-002).
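The proximity assessment mentioned above is typically a cosine similarity between two embedding vectors. Here is the standard computation, shown on toy vectors rather than real API output:

```python
import math

def cosine_similarity(a, b):
    # Proximity measure commonly applied to /embeddings vectors:
    # 1.0 means identical direction, 0.0 means orthogonal
    # (unrelated) vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding output.
same = cosine_similarity([1.0, 0.0], [1.0, 0.0])        # 1.0
unrelated = cosine_similarity([1.0, 0.0], [0.0, 1.0])   # 0.0
```

A search engine built on embeddings simply ranks candidate strings by this score against the query's vector.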
The “big” addition of 2022 is the DALL-E API. In public beta since November, it offers three options: creating images (/images/generations), editing them (/images/edits), or making variations (/images/variations). The first option generates, from a prompt of 1,000 characters maximum, square images of 256, 512, or 1024 pixels per side. By default, one at a time, but you can push it up to ten. Two output formats are possible: either Base64 or a URL that remains valid for one hour.
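The constraints listed above can be captured in a small validator that builds the request body for image generation. The parameter names (`size`, `n`, `response_format`) follow the period's documentation; treat the exact defaults as assumptions.

```python
def build_image_request(prompt, size="1024x1024", n=1,
                        response_format="url"):
    # Body for the image-generation endpoint, enforcing the
    # documented limits: 1,000-character prompt, three square
    # sizes, 1 to 10 images, URL or Base64 output.
    if len(prompt) > 1000:
        raise ValueError("prompt is limited to 1,000 characters")
    if size not in ("256x256", "512x512", "1024x1024"):
        raise ValueError("unsupported image size")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    if response_format not in ("url", "b64_json"):
        raise ValueError("unsupported response format")
    return {"prompt": prompt, "size": size, "n": n,
            "response_format": response_format}

req = build_image_request("a watercolor fox", n=2)
```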
The “edit” option involves uploading both an image and a mask: in fact a second image with the same dimensions, whose transparent parts mark the areas to be edited. Image and mask must be PNG files, square, and weigh less than 4 MB. The limit for the textual instruction is the same: 1,000 characters. The third option uses the same settings, without a mask.
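A client can cheaply pre-check two of these constraints before uploading: the PNG format (via the file's magic bytes) and the 4 MB limit. Squareness and matching dimensions would require decoding the image, so this sketch deliberately stops short of that:

```python
def check_edit_inputs(image_bytes, mask_bytes):
    # Cheap pre-flight checks for the image-edit endpoint:
    # both files must be PNG and under 4 MB.
    limit = 4 * 1024 * 1024
    png_magic = b"\x89PNG\r\n\x1a\n"  # standard PNG signature
    for name, data in (("image", image_bytes),
                       ("mask", mask_bytes)):
        if not data.startswith(png_magic):
            raise ValueError(f"{name} must be a PNG file")
        if len(data) >= limit:
            raise ValueError(f"{name} must weigh less than 4 MB")

# A stand-in for real file contents: PNG signature plus padding.
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
check_edit_inputs(fake_png, fake_png)  # passes silently
```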
There are still two official channels, maintained by OpenAI, for accessing the API: Python bindings and a Node.js library. The others come from community initiatives, covering around ten platforms from C#/.NET to Unreal Engine. The billing unit is still the token, equivalent to a “piece of a word”: approximately 4 characters. OpenAI offers an online tool to count the tokens in a request.
Engines, models, endpoints… It is not so easy to navigate OpenAI’s commercial offering, which has grown in volume since its launch in mid-2020. It wasn’t that long ago (November 2021) that the API became accessible without a waiting list. The underlying models are categorized into three engines:
The GPT-3 category comprises four models: Ada, Babbage, Curie, and Davinci. Their older versions, grouped under the Instruct banner, remain accessible. All accept text as input and produce text as output. As you progress through the alphabet, the models become more capable: they need fewer instructions to do as much as their predecessors. But they also cost more to use and can lead to longer processing times. As a general rule, OpenAI advises experimenting with Davinci, then gradually working your way down to the right compromise, keeping these task-to-model mappings in mind:
The Codex family contains two models, also 100% text, descended from the original GPT-3:
The Davinci variant, accepting up to 4,096 tokens per request, is ideal for translating natural language into code.
The Cushman variant, with up to 2,048 tokens per request, is better suited to real-time applications.
What exactly are tokens? The token is OpenAI’s basic inference unit: text, both input and output, is divided such that roughly four characters equal one token. It is also the billing unit. The third engine (Content filter) currently consists of a single model.
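The four-characters-per-token rule gives a quick back-of-the-envelope estimate of a request's billing weight; the online tokenizer gives the exact count. A minimal sketch of the heuristic:

```python
def estimate_tokens(text):
    # Rough billing estimate using the 4-characters-per-token
    # rule of thumb. Any non-empty text costs at least 1 token.
    return max(1, round(len(text) / 4))

# A 40-character prompt weighs roughly 10 tokens.
weight = estimate_tokens("Translate this sentence into French, please.")
```

Real tokenization depends on the vocabulary, so actual counts differ from this approximation, especially for code or non-English text.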
It is fed content that it classifies as safe, sensitive, or inappropriate. Several options make it possible to adjust its strictness, including setting a minimum certainty threshold. The content filter struggles with certain text styles (fiction, code, poetry, etc.) and certain formats (frequent line breaks, word repetitions, etc.). Furthermore, as with all the other models, its knowledge base stops in 2019. A continuous training mechanism is in the works at OpenAI.
To use the content filter, you go through the reference endpoint: /completions. There are three others, intended respectively for classification, semantic search, and Q&A. Two official channels reach these HTTP endpoints: Python and Node.js libraries. The community has developed others (C#/.NET, Crystal, Dart, Go, Java, PHP, Ruby, Unity, and Unreal Engine). The models are given instructions and, ideally, some context, while certain parameters can be configured. For example, “temperature”: the closer it is to 0, the more deterministic the model; the closer it is to 1, the more risks it takes.
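The temperature behavior described above can be made concrete with the standard temperature-scaled softmax, which is how sampling temperature reshapes a next-token distribution. This is a generic illustration of the mechanism, not OpenAI's internal code:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Temperature divides the logits before the softmax:
    # values near 0 sharpen the distribution (near-deterministic
    # choice of the top token); values near 1 keep the model's
    # original spread, allowing riskier picks.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Three candidate tokens with toy scores.
cautious = softmax_with_temperature([2.0, 1.0, 0.5], 0.2)
creative = softmax_with_temperature([2.0, 1.0, 0.5], 1.0)
# At low temperature, almost all probability mass lands on the
# top-scoring token; at 1.0 the alternatives keep real weight.
```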
By default, with /completions, the API is stochastic (it produces different results on each call). The idea is to talk to it as you would to a schoolchild. The result is numerous potential uses: classification, text generation, transformation (summary, translation, reformulation of concepts, etc.), factual answers, and so on. The same goes for Codex, which can transform instructions into code as well as add comments, complete a line, or suggest a useful element (library, API call, etc.). OpenAI gives some advice, including:
With Python, for example, Codex handles the unconventional triple-quote comment style better than the pound-sign style.
In beta, the /classifications endpoint resembles AutoML. You provide it with labeled examples, either on the fly (200 maximum) or through preloaded files (150 MB maximum per file and 1 GB in total). Without requiring ad hoc training, it returns the most relevant examples for a given query, after first filtering the examples by semantic scoring.
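A sketch of a /classifications request with inline examples, enforcing the 200-example cap mentioned above. Field names follow the beta documentation of the period; the default model choices here are assumptions:

```python
def build_classification_request(query, examples, labels,
                                 model="curie",
                                 search_model="ada"):
    # Body for the (beta) /classifications endpoint. Labeled
    # examples are passed inline as [text, label] pairs, with a
    # documented cap of 200 per request.
    if len(examples) > 200:
        raise ValueError("at most 200 inline examples")
    return {
        "model": model,          # model that picks the label
        "search_model": search_model,  # model that pre-filters
        "query": query,
        "examples": examples,
        "labels": labels,
    }

req = build_classification_request(
    "Win a free cruise today!",
    [["Limited offer, click now", "spam"],
     ["Meeting moved to 3pm", "ham"]],
    ["spam", "ham"])
```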
Rather than providing them with examples on every call, OpenAI’s models can be trained on custom datasets. Billing again depends on the tokens used (number of tokens in the training files × number of cycles). Here too the format is JSON Lines, with prompt-completion pairs. OpenAI offers a command-line tool to help prepare data from other formats (CSV, TSV, XLSX, JSON).
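The billing formula above (tokens in the training files multiplied by the number of cycles) can be approximated before launching a job, using the 4-characters-per-token heuristic. The default of four cycles is an assumption for illustration:

```python
def training_tokens(jsonl_lines, n_epochs=4):
    # Billing heuristic from the text: tokens in the training
    # file x number of training cycles, with the rough
    # 4-characters-per-token approximation. `n_epochs=4` is an
    # assumed default, not an official figure.
    chars = sum(len(line) for line in jsonl_lines)
    return round(chars / 4) * n_epochs

# Two cycles over a single 40-character record: ~20 tokens billed.
cost_in_tokens = training_tokens(["x" * 40], n_epochs=2)
```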
By default, Curie is the model that gets trained, but the other three members of the GPT-3 family are also compatible.
Once a model has been fine-tuned, it can be passed as a parameter to /completions. Depending on the task, training will require more or fewer examples: at least 100 per category for classification, at least 500 for conditional text generation, several thousand for unconstrained generation, etc. OpenAI reserves the right to use the data provided to its models in order to improve them. New users have an initial spending limit, which evolves as usage develops. Once more than five people use an application, it must go live, a transition that is not automatic and requires risk-assessment-type checks.