HEX
Server: Apache/2.4.52 (Ubuntu)
System: Linux spn-python 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64
User: arjun (1000)
PHP: 8.1.2-1ubuntu2.20
Disabled: NONE
File: /usr/local/lib/python3.10/dist-packages/openai/cli/_api/__pycache__/completions.cpython-310.pyc
# Decompiled from completions.cpython-310.pyc
# co_filename: /usr/local/lib/python3.10/dist-packages/openai/cli/_api/completions.py
from __future__ import annotations

import sys
from typing import TYPE_CHECKING, Optional, cast
from argparse import ArgumentParser
from functools import partial

from openai.types.completion import Completion

from .._utils import get_client
from .._errors import CLIError
from ..._types import NOT_GIVEN, NotGivenOr
from ..._utils import is_given
from ..._models import BaseModel
from ..._streaming import Stream

if TYPE_CHECKING:
    from argparse import _SubParsersAction


def register(subparser: _SubParsersAction[ArgumentParser]) -> None:
    # Wire up the `completions.create` subcommand and its flags.
    sub = subparser.add_parser("completions.create")
    sub.add_argument("-m", "--model", help="The model to use", required=True)
    sub.add_argument("-p", "--prompt", help="An optional prompt to complete from")
    sub.add_argument("--stream", help="Stream tokens as they're ready.", action="store_true")
    sub.add_argument("-M", "--max-tokens", help="The maximum number of tokens to generate", type=int)
    sub.add_argument("-t", "--temperature", help="What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.\n\nMutually exclusive with `top_p`.", type=float)
    sub.add_argument("-P", "--top_p", help="An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10%% probability mass are considered.\n\nMutually exclusive with `temperature`.", type=float)
    sub.add_argument("-n", "--n", help="How many sub-completions to generate for each prompt.", type=int)
    sub.add_argument("--logprobs", help="Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. So for example, if `logprobs` is 10, the API will return a list of the 10 most likely tokens. If `logprobs` is 0, only the chosen tokens will have logprobs returned.", type=int)
    sub.add_argument("--best_of", help="Generates `best_of` completions server-side and returns the 'best' (the one with the highest log probability per token). Results cannot be streamed.", type=int)
    sub.add_argument("--echo", help="Echo back the prompt in addition to the completion", action="store_true")
    sub.add_argument("--frequency_penalty", help="Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.", type=float)
    sub.add_argument("--presence_penalty", help="Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.", type=float)
    sub.add_argument("--suffix", help="The suffix that comes after a completion of inserted text.")
    sub.add_argument("--stop", help="A stop sequence at which to stop generating tokens.")
    sub.add_argument("--user", help="A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.")
    sub.set_defaults(func=CLICompletions.create, args_model=CLICompletionCreateArgs)


class CLICompletionCreateArgs(BaseModel):
    model: str
    stream: bool = False
    prompt: Optional[str] = None
    n: NotGivenOr[int] = NOT_GIVEN
    stop: NotGivenOr[str] = NOT_GIVEN
    user: NotGivenOr[str] = NOT_GIVEN
    echo: NotGivenOr[bool] = NOT_GIVEN
    suffix: NotGivenOr[str] = NOT_GIVEN
    best_of: NotGivenOr[int] = NOT_GIVEN
    top_p: NotGivenOr[float] = NOT_GIVEN
    logprobs: NotGivenOr[int] = NOT_GIVEN
    max_tokens: NotGivenOr[int] = NOT_GIVEN
    temperature: NotGivenOr[float] = NOT_GIVEN
    presence_penalty: NotGivenOr[float] = NOT_GIVEN
    frequency_penalty: NotGivenOr[float] = NOT_GIVEN


class CLICompletions:
    @staticmethod
    def create(args: CLICompletionCreateArgs) -> None:
        # Streaming more than one completion at a time is not supported.
        if is_given(args.n) and args.n > 1 and args.stream:
            raise CLIError("Can't stream completions with n>1 with the current CLI")

        # Bind all request parameters now; stream=True is only added on the streaming path.
        make_request = partial(
            get_client().completions.create,
            n=args.n,
            echo=args.echo,
            stop=args.stop,
            user=args.user,
            model=args.model,
            top_p=args.top_p,
            prompt=args.prompt,
            suffix=args.suffix,
            best_of=args.best_of,
            logprobs=args.logprobs,
            max_tokens=args.max_tokens,
            temperature=args.temperature,
            presence_penalty=args.presence_penalty,
            frequency_penalty=args.frequency_penalty,
        )

        if args.stream:
            return CLICompletions._stream_create(cast(Stream[Completion], make_request(stream=True)))
        return CLICompletions._create(make_request())

    @staticmethod
    def _create(completion: Completion) -> None:
        # Print each choice, with a header only when there is more than one.
        should_print_header = len(completion.choices) > 1
        for choice in completion.choices:
            if should_print_header:
                sys.stdout.write("===== Completion {} =====\n".format(choice.index))
            sys.stdout.write(choice.text)
            if should_print_header or not choice.text.endswith("\n"):
                sys.stdout.write("\n")
            sys.stdout.flush()

    @staticmethod
    def _stream_create(stream: Stream[Completion]) -> None:
        # Write tokens as chunks arrive, keeping choices in index order.
        for completion in stream:
            should_print_header = len(completion.choices) > 1
            for choice in sorted(completion.choices, key=lambda c: c.index):
                if should_print_header:
                    sys.stdout.write("===== Chat Completion {} =====\n".format(choice.index))
                sys.stdout.write(choice.text)
                if should_print_header:
                    sys.stdout.write("\n")
                sys.stdout.flush()
        sys.stdout.write("\n")
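For context, this module backs the legacy `completions.create` subcommand of the `openai` console script installed with the package. A typical invocation, assuming the CLI entry point is on PATH and OPENAI_API_KEY is set in the environment (the model name here is illustrative, not taken from this dump):

    openai api completions.create -m gpt-3.5-turbo-instruct -p "Say hello" -M 16 --stream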