HEX
Server: Apache/2.4.52 (Ubuntu)
System: Linux spn-python 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64
User: arjun (1000)
PHP: 8.1.2-1ubuntu2.20
Disabled: NONE
File: //usr/local/lib/python3.10/dist-packages/openai/resources/__pycache__/batches.cpython-310.pyc
# Decompiled content of batches.cpython-310.pyc
# (compiled from /usr/local/lib/python3.10/dist-packages/openai/resources/batches.py)
from __future__ import annotations

from typing import Optional
from typing_extensions import Literal

import httpx

from .. import _legacy_response
from ..types import batch_list_params, batch_create_params
from .._types import NOT_GIVEN, Body, Query, Headers, NotGiven
from .._utils import maybe_transform, async_maybe_transform
from .._compat import cached_property
from .._resource import SyncAPIResource, AsyncAPIResource
from .._response import to_streamed_response_wrapper, async_to_streamed_response_wrapper
from ..pagination import SyncCursorPage, AsyncCursorPage
from ..types.batch import Batch
from .._base_client import AsyncPaginator, make_request_options
from ..types.shared_params.metadata import Metadata

__all__ = ["Batches", "AsyncBatches"]


class Batches(SyncAPIResource):
    @cached_property
    def with_raw_response(self) -> BatchesWithRawResponse:
        """
        This property can be used as a prefix for any HTTP method call to return
        the raw response object instead of the parsed content.

        For more information, see https://www.github.com/openai/openai-python#accessing-raw-response-data-eg-headers
        """
        return BatchesWithRawResponse(self)

    @cached_property
    def with_streaming_response(self) -> BatchesWithStreamingResponse:
        """
        An alternative to `.with_raw_response` that doesn't eagerly read the response body.

        For more information, see https://www.github.com/openai/openai-python#with_streaming_response
        """
        return BatchesWithStreamingResponse(self)

    def create(
        self,
        *,
        completion_window: Literal["24h"],
        endpoint: Literal["/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
        input_file_id: str,
        metadata: Optional[Metadata] | NotGiven = NOT_GIVEN,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    ) -> Batch:
        """
        Creates and executes a batch from an uploaded file of requests

        Args:
          completion_window: The time frame within which the batch should be processed. Currently only `24h`
              is supported.

          endpoint: The endpoint to be used for all requests in the batch. Currently
              `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions` are supported.
              Note that `/v1/embeddings` batches are also restricted to a maximum of 50,000
              embedding inputs across all requests in the batch.

          input_file_id: The ID of an uploaded file that contains requests for the new batch.

              See [upload file](https://platform.openai.com/docs/api-reference/files/create)
              for how to upload a file.

              Your input file must be formatted as a
              [JSONL file](https://platform.openai.com/docs/api-reference/batch/request-input),
              and must be uploaded with the purpose `batch`. The file can contain up to 50,000
              requests, and can be up to 200 MB in size.

          metadata: Set of 16 key-value pairs that can be attached to an object. This can be useful
              for storing additional information about the object in a structured format, and
              querying for objects via API or the dashboard.

              Keys are strings with a maximum length of 64 characters. Values are strings with
              a maximum length of 512 characters.

          extra_headers: Send extra headers

          extra_query: Add additional query parameters to the request

          extra_body: Add additional JSON properties to the request

          timeout: Override the client-level default timeout for this request, in seconds
        """
        return self._post(
            "/batches",
            body=maybe_transform(
                {
                    "completion_window": completion_window,
                    "endpoint": endpoint,
                    "input_file_id": input_file_id,
                    "metadata": metadata,
                },
                batch_create_params.BatchCreateParams,
            ),
            options=make_request_options(
                extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
            ),
            cast_to=Batch,
        )

    def retrieve(
        self,
        batch_id: str,
        *,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    ) -> Batch:
        """
        Retrieves a batch.

        Args:
          extra_headers: Send extra headers

          extra_query: Add additional query parameters to the request

          extra_body: Add additional JSON properties to the request

          timeout: Override the client-level default timeout for this request, in seconds
        """
        if not batch_id:
            raise ValueError(f"Expected a non-empty value for `batch_id` but received {batch_id!r}")
        return self._get(
            f"/batches/{batch_id}",
            options=make_request_options(
                extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
            ),
            cast_to=Batch,
        )

    def list(
        self,
        *,
        after: str | NotGiven = NOT_GIVEN,
        limit: int | NotGiven = NOT_GIVEN,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    ) -> SyncCursorPage[Batch]:
        """List your organization's batches.

        Args:
          after: A cursor for use in pagination. `after` is an object ID that defines your place
              in the list. For instance, if you make a list request and receive 100 objects,
              ending with obj_foo, your subsequent call can include after=obj_foo in order to
              fetch the next page of the list.

          limit: A limit on the number of objects to be returned. Limit can range between 1 and
              100, and the default is 20.

          extra_headers: Send extra headers

          extra_query: Add additional query parameters to the request

          extra_body: Add additional JSON properties to the request

          timeout: Override the client-level default timeout for this request, in seconds
        """
        return self._get_api_list(
            "/batches",
            page=SyncCursorPage[Batch],
            options=make_request_options(
                extra_headers=extra_headers,
                extra_query=extra_query,
                extra_body=extra_body,
                timeout=timeout,
                query=maybe_transform({"after": after, "limit": limit}, batch_list_params.BatchListParams),
            ),
            model=Batch,
        )

    def cancel(
        self,
        batch_id: str,
        *,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    ) -> Batch:
        """Cancels an in-progress batch.

        The batch will be in status `cancelling` for up to
        10 minutes, before changing to `cancelled`, where it will have partial results
        (if any) available in the output file.

        Args:
          extra_headers: Send extra headers

          extra_query: Add additional query parameters to the request

          extra_body: Add additional JSON properties to the request

          timeout: Override the client-level default timeout for this request, in seconds
        """
        if not batch_id:
            raise ValueError(f"Expected a non-empty value for `batch_id` but received {batch_id!r}")
        return self._post(
            f"/batches/{batch_id}/cancel",
            options=make_request_options(
                extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
            ),
            cast_to=Batch,
        )


# `AsyncBatches(AsyncAPIResource)` mirrors `Batches` with the same four methods and the same
# docstrings: request bodies go through `async_maybe_transform`, the HTTP calls are awaited,
# `list` returns `AsyncPaginator[Batch, AsyncCursorPage[Batch]]`, and the raw/streaming
# properties return `AsyncBatchesWithRawResponse` / `AsyncBatchesWithStreamingResponse`.


class BatchesWithRawResponse:
    def __init__(self, batches: Batches) -> None:
        self._batches = batches
        self.create = _legacy_response.to_raw_response_wrapper(batches.create)
        self.retrieve = _legacy_response.to_raw_response_wrapper(batches.retrieve)
        self.list = _legacy_response.to_raw_response_wrapper(batches.list)
        self.cancel = _legacy_response.to_raw_response_wrapper(batches.cancel)


class BatchesWithStreamingResponse:
    def __init__(self, batches: Batches) -> None:
        self._batches = batches
        self.create = to_streamed_response_wrapper(batches.create)
        self.retrieve = to_streamed_response_wrapper(batches.retrieve)
        self.list = to_streamed_response_wrapper(batches.list)
        self.cancel = to_streamed_response_wrapper(batches.cancel)


# `AsyncBatchesWithRawResponse` and `AsyncBatchesWithStreamingResponse` wrap an `AsyncBatches`
# instance the same way, using `_legacy_response.async_to_raw_response_wrapper` and
# `async_to_streamed_response_wrapper` respectively.
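
For context, these resource methods are normally reached through the top-level OpenAI client rather than by importing this module directly. The following is a minimal usage sketch, assuming an OPENAI_API_KEY is set in the environment; the requests.jsonl file name, the metadata value, and the printed fields are illustrative only.

# Minimal sketch of driving the Batches resource through the public OpenAI client.
# Assumes OPENAI_API_KEY is configured; the file name and metadata below are hypothetical.
from openai import OpenAI

client = OpenAI()

# Upload the JSONL request file with purpose "batch", as required by create().
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

# Create the batch against the chat completions endpoint with the only supported window.
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
    metadata={"job": "nightly-eval"},
)

# Poll the batch, and cancel it if it is still in progress and no longer needed.
batch = client.batches.retrieve(batch.id)
if batch.status in ("validating", "in_progress"):
    client.batches.cancel(batch.id)

# Page through recent batches (default page size 20, maximum 100).
for b in client.batches.list(limit=20):
    print(b.id, b.status)

# Raw-response access, as exposed by the with_raw_response property above.
raw = client.batches.with_raw_response.retrieve(batch.id)
print(raw.headers.get("x-request-id"), raw.parse().status)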