Server: Apache/2.4.52 (Ubuntu)
System: Linux spn-python 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64
User: arjun (1000)
PHP: 8.1.2-1ubuntu2.20
Disabled: NONE
File: /usr/local/lib/python3.10/dist-packages/openai/lib/__pycache__/_validators.cpython-310.pyc
[Compiled CPython 3.10 bytecode; the readable structure below is what is recoverable from the marshal dump of /usr/local/lib/python3.10/dist-packages/openai/lib/_validators.py]

Imports: __future__.annotations, os, sys, typing (Any, TypeVar, Callable, Optional, NamedTuple), typing_extensions (TypeAlias), and pandas (imported as pd via openai._extras).

class Remediation(NamedTuple):
    name: str
    immediate_msg: Optional[str] = None
    necessary_msg: Optional[str] = None
    necessary_fn: Optional[Callable[[Any], Any]] = None
    optional_msg: Optional[str] = None
    optional_fn: Optional[Callable[[Any], Any]] = None
    error_msg: Optional[str] = None

OptionalDataFrameT = TypeVar("OptionalDataFrameT", bound="Optional[pd.DataFrame]")
Validator: TypeAlias = "Callable[[pd.DataFrame], Remediation | None]"

Validators (each takes a pandas DataFrame of prompt/completion pairs and returns a Remediation):
- num_examples_validator(df): reports the number of prompt-completion pairs and recommends at least a few hundred examples if there are fewer than 100.
- necessary_column_validator(df, necessary_column): ensures the necessary column/key is present, lower-casing a matching column name where possible.
- additional_column_validator(df, fields=["prompt", "completion"]): removes any columns/keys other than prompt and completion.
- non_empty_field_validator(df, field="completion"): ensures no value in the field is empty or null.
- duplicated_rows_validator(df, fields=["prompt", "completion"]): suggests removing duplicated prompt/completion rows.
- long_examples_validator(df): suggests dropping very long examples (for classification and conditional generation, examples shouldn't exceed 2048 tokens); skipped for open-ended generation.
- common_prompt_suffix_validator(df): suggests appending a common separator suffix (e.g. " ->", "\n\n###\n\n") to every prompt for classification or conditional generation.
- common_prompt_prefix_validator(df): suggests removing a long prefix shared by all prompts.
- common_completion_prefix_validator(df): suggests removing a long prefix shared by all completions.
- common_completion_suffix_validator(df): suggests appending a common ending (e.g. " [END]") to every completion for classification or conditional generation.
- completions_space_start_validator(df): suggests starting every completion with a whitespace character, which tends to help tokenization.
- lower_case_validator(df, column): suggests lowercasing a column if more than a third of its letters are uppercase.
- format_inferrer_validator(df): infers the likely fine-tuning format and, for classification, recommends a faster/cheaper model such as `ada` and a held-out validation set.

Helpers:
- read_any_format(fname, fields=["prompt", "completion"]): reads .csv, .tsv, .xlsx, .txt, .json or .jsonl with pandas (first sheet only for .xlsx; one completion per line for .txt) and returns (DataFrame | None, Remediation).
- apply_necessary_remediation(df, remediation): applies a necessary remediation to the dataframe, or prints the error message and aborts if one exists.
- accept_suggestion(input_text, auto_accept): prompts for a Y/n answer, returning True immediately when auto_accept is set.
- apply_optional_remediation(df, remediation, auto_accept): applies an optional remediation based on the user's answer.
- estimate_fine_tuning_time(df): estimates how long the dataset will take to train on a `curie` model (less for `ada` and `babbage`).
- get_outfnames(fname, split): builds non-clobbering "*_prepared[_train|_valid].jsonl" output filenames.
- get_classification_hyperparams(df): returns the number of classes and the positive class for classification runs.
- write_out_file(df, fname, any_remediations, auto_accept): writes the prepared JSONL (optionally split into train/valid sets for classification) and prints the suggested `openai api fine_tunes.create` command.
- infer_task_type(df): infers open-ended generation, classification, or conditional generation from the data.
- get_common_xfix(series, xfix="suffix"): finds the longest common suffix or prefix of all values in a series.
- get_validators(): returns the list of validators above.
- apply_validators(df, fname, remediation, validators, auto_accept, write_out_file_func): runs every validator, applies necessary remediations, offers optional ones, and hands the result to write_out_file_func.
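A minimal usage sketch based on the structure recovered above. The import path (openai.lib._validators) is the private module location shown in the dump, not a documented public API, and "data.jsonl" is a hypothetical prompt/completion file; treat the whole snippet as illustrative only.

from openai.lib._validators import (
    apply_necessary_remediation,
    apply_validators,
    get_validators,
    read_any_format,
    write_out_file,
)

fname = "data.jsonl"  # hypothetical input file with "prompt"/"completion" pairs

# Load the file (CSV/TSV/XLSX/TXT/JSON/JSONL are handled) and abort early if the
# read step produced an error remediation.
df, read_remediation = read_any_format(fname)
apply_necessary_remediation(None, read_remediation)

# Run every validator: necessary fixes are applied to the dataframe, optional ones
# are offered (auto-accepted here), and write_out_file emits the prepared JSONL
# together with a suggested fine-tuning command.
apply_validators(
    df,
    fname,
    remediation=read_remediation,
    validators=get_validators(),
    auto_accept=True,
    write_out_file_func=write_out_file,
)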