File: /usr/local/lib/python3.10/dist-packages/openai/cli/_api/__pycache__/completions.cpython-310.pyc
# Python source recovered from the compiled bytecode above; the class bodies
# at the end are truncated in the dump, so only their headers appear here
# (the seemingly unused imports belong to those truncated bodies).
from __future__ import annotations

import sys
from typing import TYPE_CHECKING, Optional, cast
from argparse import ArgumentParser
from functools import partial

from openai.types.completion import Completion

from .._utils import get_client
from ..._types import NOT_GIVEN, NotGivenOr
from ..._utils import is_given
from .._errors import CLIError
from .._models import BaseModel
from ..._streaming import Stream

if TYPE_CHECKING:
    from argparse import _SubParsersAction


def register(subparser: _SubParsersAction[ArgumentParser]) -> None:
    sub = subparser.add_parser("completions.create")

    # Required arguments
    sub.add_argument("-m", "--model", help="The model to use", required=True)

    # Optional arguments
    sub.add_argument("-p", "--prompt", help="An optional prompt to complete from")
    sub.add_argument("--stream", help="Stream tokens as they're ready.", action="store_true")
    sub.add_argument("-M", "--max-tokens", help="The maximum number of tokens to generate", type=int)
    sub.add_argument("-t", "--temperature", help="What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. Mutually exclusive with `top_p`.", type=float)
    sub.add_argument("-P", "--top_p", help="An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10%% probability mass are considered. Mutually exclusive with `temperature`.", type=float)
    sub.add_argument("-n", "--n", help="How many sub-completions to generate for each prompt.", type=int)
    sub.add_argument("--logprobs", help="Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. So for example, if `logprobs` is 10, the API will return a list of the 10 most likely tokens. If `logprobs` is 0, only the chosen tokens will have logprobs returned.", type=int)
    sub.add_argument("--best_of", help="Generates `best_of` completions server-side and returns the 'best' (the one with the highest log probability per token). Results cannot be streamed.", type=int)
    sub.add_argument("--echo", help="Echo back the prompt in addition to the completion", action="store_true")
    sub.add_argument("--frequency_penalty", help="Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.", type=float)
    sub.add_argument("--presence_penalty", help="Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.", type=float)
    sub.add_argument("--suffix", help="The suffix that comes after a completion of inserted text.")
    sub.add_argument("--stop", help="A stop sequence at which to stop generating tokens.")
    sub.add_argument("--user", help="A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.")
    sub.set_defaults(func=CLICompletions.create, args_model=CLICompletionCreateArgs)


class CLICompletionCreateArgs(BaseModel):
    ...  # field definitions truncated in the dump


class CLICompletions:
    @staticmethod
    def create(args: CLICompletionCreateArgs) -> None:
        ...  # handler body truncated in the dump
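
A minimal usage sketch, not part of the recovered module: it wires the `register` hook above into a fresh ArgumentParser and parses a sample command line, assuming the real `openai` package is installed. The parser name, model, and prompt values are illustrative placeholders; actual dispatch happens in the package's own CLI entry point, not here.

# Illustrative harness only; uses nothing beyond stock argparse and the
# installed openai package. Values below are placeholders.
from argparse import ArgumentParser

from openai.cli._api.completions import register

parser = ArgumentParser(prog="openai-demo")
subparsers = parser.add_subparsers(dest="command")
register(subparsers)

args = parser.parse_args(
    ["completions.create", "-m", "gpt-3.5-turbo-instruct", "-p", "Say hello", "-M", "16"]
)
print(args.model)       # gpt-3.5-turbo-instruct
print(args.max_tokens)  # 16 (`--max-tokens` parsed via type=int)
print(args.func)        # CLICompletions.create, attached by set_defaults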