HEX
Server: Apache/2.4.52 (Ubuntu)
System: Linux spn-python 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64
User: arjun (1000)
PHP: 8.1.2-1ubuntu2.20
Disabled functions: NONE
File: //proc/1233/cwd/usr/local/lib/python3.10/dist-packages/tiktoken/__pycache__/core.cpython-310.pyc
# Recovered source: /usr/local/lib/python3.10/dist-packages/tiktoken/core.py
# (reconstructed from core.cpython-310.pyc)
from __future__ import annotations

import functools
from concurrent.futures import ThreadPoolExecutor
from typing import TYPE_CHECKING, AbstractSet, Collection, Literal, NoReturn, Sequence

import regex

from tiktoken import _tiktoken

if TYPE_CHECKING:
    import numpy as np
    import numpy.typing as npt


class Encoding:
    def __init__(
        self,
        name: str,
        *,
        pat_str: str,
        mergeable_ranks: dict[bytes, int],
        special_tokens: dict[str, int],
        explicit_n_vocab: int | None = None,
    ):
        """Creates an Encoding object.

        See openai_public.py for examples of how to construct an Encoding object.

        Args:
            name: The name of the encoding. It should be clear from the name of the encoding
                what behaviour to expect, in particular, encodings with different special tokens
                should have different names.
            pat_str: A regex pattern string that is used to split the input text.
            mergeable_ranks: A dictionary mapping mergeable token bytes to their ranks. The ranks
                must correspond to merge priority.
            special_tokens: A dictionary mapping special token strings to their token values.
            explicit_n_vocab: The number of tokens in the vocabulary. If provided, it is checked
                that the number of mergeable tokens and special tokens is equal to this number.
        """
        self.name = name

        self._pat_str = pat_str
        self._mergeable_ranks = mergeable_ranks
        self._special_tokens = special_tokens

        self.max_token_value = max(
            max(mergeable_ranks.values()), max(special_tokens.values(), default=0)
        )
        if explicit_n_vocab:
            assert len(mergeable_ranks) + len(special_tokens) == explicit_n_vocab
            assert self.max_token_value == explicit_n_vocab - 1

        self._core_bpe = _tiktoken.CoreBPE(mergeable_ranks, special_tokens, pat_str)

    def __repr__(self) -> str:
        return f"<Encoding {self.name!r}>"

    # ====================
    # Encoding
    # ====================

    def encode_ordinary(self, text: str) -> list[int]:
        """Encodes a string into tokens, ignoring special tokens.

        This is equivalent to `encode(text, disallowed_special=())` (but slightly faster).

        ```
        >>> enc.encode_ordinary("hello world")
        [31373, 995]
        ```
        """
        try:
            return self._core_bpe.encode_ordinary(text)
        except UnicodeEncodeError:
            # See comment in encode
            text = text.encode("utf-16", "surrogatepass").decode("utf-16", "replace")
            return self._core_bpe.encode_ordinary(text)

    def encode(
        self,
        text: str,
        *,
        allowed_special: Literal["all"] | AbstractSet[str] = set(),
        disallowed_special: Literal["all"] | Collection[str] = "all",
    ) -> list[int]:
        """Encodes a string into tokens.

        Special tokens are artificial tokens used to unlock capabilities from a model,
        such as fill-in-the-middle. So we want to be careful about accidentally encoding special
        tokens, since they can be used to trick a model into doing something we don't want it to do.

        Hence, by default, encode will raise an error if it encounters text that corresponds
        to a special token. This can be controlled on a per-token level using the `allowed_special`
        and `disallowed_special` parameters. In particular:
        - Setting `disallowed_special` to () will prevent this function from raising errors and
          cause all text corresponding to special tokens to be encoded as natural text.
        - Setting `allowed_special` to "all" will cause this function to encode all text
          corresponding to special tokens as special tokens.

        ```
        >>> enc.encode("hello world")
        [31373, 995]
        >>> enc.encode("<|endoftext|>", allowed_special={"<|endoftext|>"})
        [50256]
        >>> enc.encode("<|endoftext|>", allowed_special="all")
        [50256]
        >>> enc.encode("<|endoftext|>")
        # Raises ValueError
        >>> enc.encode("<|endoftext|>", disallowed_special=())
        [27, 91, 437, 1659, 5239, 91, 29]
        ```
        """
        if allowed_special == "all":
            allowed_special = self.special_tokens_set
        if disallowed_special == "all":
            disallowed_special = self.special_tokens_set - allowed_special
        if disallowed_special:
            if not isinstance(disallowed_special, frozenset):
                disallowed_special = frozenset(disallowed_special)
            if match := _special_token_regex(disallowed_special).search(text):
                raise_disallowed_special_token(match.group())

        try:
            return self._core_bpe.encode(text, allowed_special)
        except UnicodeEncodeError:
            # BPE operates on bytes, but the regex operates on unicode. If we pass a str that is
            # invalid UTF-8 to Rust, it will rightfully complain. Here we do a quick and dirty
            # fixup for any surrogate pairs that may have been passed through.
            text = text.encode("utf-16", "surrogatepass").decode("utf-16", "replace")
            return self._core_bpe.encode(text, allowed_special)

    def encode_to_numpy(
        self,
        text: str,
        *,
        allowed_special: Literal["all"] | AbstractSet[str] = set(),
        disallowed_special: Literal["all"] | Collection[str] = "all",
    ) -> npt.NDArray[np.uint32]:
        """Encodes a string into tokens, returning a numpy array.

        Avoids the overhead of copying the token buffer into a Python list.
        """
        if allowed_special == "all":
            allowed_special = self.special_tokens_set
        if disallowed_special == "all":
            disallowed_special = self.special_tokens_set - allowed_special
        if disallowed_special:
            if not isinstance(disallowed_special, frozenset):
                disallowed_special = frozenset(disallowed_special)
            if match := _special_token_regex(disallowed_special).search(text):
                raise_disallowed_special_token(match.group())

        import numpy as np

        buffer = self._core_bpe.encode_to_tiktoken_buffer(text, self.special_tokens_set)
        return np.frombuffer(buffer, dtype=np.uint32)

    def encode_ordinary_batch(self, text: list[str], *, num_threads: int = 8) -> list[list[int]]:
        """Encodes a list of strings into tokens, in parallel, ignoring special tokens.

        This is equivalent to `encode_batch(text, disallowed_special=())` (but slightly faster).

        ```
        >>> enc.encode_ordinary_batch(["hello world", "goodbye world"])
        [[31373, 995], [11274, 16390, 995]]
        ```
        """
        encoder = functools.partial(self.encode_ordinary)
        with ThreadPoolExecutor(num_threads) as e:
            return list(e.map(encoder, text))

    def encode_batch(
        self,
        text: list[str],
        *,
        num_threads: int = 8,
        allowed_special: Literal["all"] | AbstractSet[str] = set(),
        disallowed_special: Literal["all"] | Collection[str] = "all",
    ) -> list[list[int]]:
        """Encodes a list of strings into tokens, in parallel.

        See `encode` for more details on `allowed_special` and `disallowed_special`.

        ```
        >>> enc.encode_batch(["hello world", "goodbye world"])
        [[31373, 995], [11274, 16390, 995]]
        ```
        """
        if allowed_special == "all":
            allowed_special = self.special_tokens_set
        if disallowed_special == "all":
            disallowed_special = self.special_tokens_set - allowed_special
        if not isinstance(disallowed_special, frozenset):
            disallowed_special = frozenset(disallowed_special)

        encoder = functools.partial(
            self.encode, allowed_special=allowed_special, disallowed_special=disallowed_special
        )
        with ThreadPoolExecutor(num_threads) as e:
            return list(e.map(encoder, text))

    def encode_with_unstable(
        self,
        text: str,
        *,
        allowed_special: Literal["all"] | AbstractSet[str] = set(),
        disallowed_special: Literal["all"] | Collection[str] = "all",
    ) -> tuple[list[int], list[list[int]]]:
        """Encodes a string into stable tokens and possible completion sequences.

        Note that the stable tokens will only represent a substring of `text`.

        See `encode` for more details on `allowed_special` and `disallowed_special`.

        This API should itself be considered unstable.

        ```
        >>> enc.encode_with_unstable("hello fanta")
        ([31373], [(277, 4910), (5113, 265), ..., (8842,)])

        >>> text = "..."
        >>> stable_tokens, completions = enc.encode_with_unstable(text)
        >>> assert text.encode().startswith(enc.decode_bytes(stable_tokens))
        >>> assert all(enc.decode_bytes(stable_tokens + seq).startswith(text.encode()) for seq in completions)
        ```
        """
        if allowed_special == "all":
            allowed_special = self.special_tokens_set
        if disallowed_special == "all":
            disallowed_special = self.special_tokens_set - allowed_special
        if disallowed_special:
            if not isinstance(disallowed_special, frozenset):
                disallowed_special = frozenset(disallowed_special)
            if match := _special_token_regex(disallowed_special).search(text):
                raise_disallowed_special_token(match.group())

        return self._core_bpe.encode_with_unstable(text, allowed_special)

    def encode_single_token(self, text_or_bytes: str | bytes) -> int:
        """Encodes text corresponding to a single token to its token value.

        NOTE: this will encode all special tokens.

        Raises `KeyError` if the token is not in the vocabulary.

        ```
        >>> enc.encode_single_token("hello")
        31373
        ```
        """
        if isinstance(text_or_bytes, str):
            text_or_bytes = text_or_bytes.encode("utf-8")
        return self._core_bpe.encode_single_token(text_or_bytes)

    # ====================
    # Decoding
    # ====================

    def decode_bytes(self, tokens: Sequence[int]) -> bytes:
        """Decodes a list of tokens into bytes.

        ```
        >>> enc.decode_bytes([31373, 995])
        b'hello world'
        ```
        """
        return self._core_bpe.decode_bytes(tokens)

    def decode(self, tokens: Sequence[int], errors: str = "replace") -> str:
        """Decodes a list of tokens into a string.

        WARNING: the default behaviour of this function is lossy, since decoded bytes are not
        guaranteed to be valid UTF-8. You can control this behaviour using the `errors` parameter,
        for instance, setting `errors="strict"`.

        ```
        >>> enc.decode([31373, 995])
        'hello world'
        ```
        """
        return self._core_bpe.decode_bytes(tokens).decode("utf-8", errors=errors)

    def decode_single_token_bytes(self, token: int) -> bytes:
        """Decodes a token into bytes.

        NOTE: this will decode all special tokens.

        Raises `KeyError` if the token is not in the vocabulary.

        ```
        >>> enc.decode_single_token_bytes(31373)
        b'hello'
        ```
        """
        return self._core_bpe.decode_single_token_bytes(token)

    def decode_tokens_bytes(self, tokens: Sequence[int]) -> list[bytes]:
        """Decodes a list of tokens into a list of bytes.

        Useful for visualising tokenisation.
        >>> enc.decode_tokens_bytes([31373, 995])
        [b'hello', b' world']
        """
        return [self.decode_single_token_bytes(token) for token in tokens]

    def decode_with_offsets(self, tokens: Sequence[int]) -> tuple[str, list[int]]:
        """Decodes a list of tokens into a string and a list of offsets.

        Each offset is the index into text corresponding to the start of each token.
        If UTF-8 character boundaries do not line up with token boundaries, the offset is the index
        of the first character that contains bytes from the token.

        This will currently raise if given tokens that decode to invalid UTF-8; this behaviour may
        change in the future to be more permissive.

        >>> enc.decode_with_offsets([31373, 995])
        ('hello world', [0, 5])
        """
        token_bytes = self.decode_tokens_bytes(tokens)

        text_len = 0
        offsets = []
        for token in token_bytes:
            # Bytes in [0x80, 0xC0) are UTF-8 continuation bytes, so a token that
            # starts mid-character points back at the start of that character.
            offsets.append(max(0, text_len - (0x80 <= token[0] < 0xC0)))
            text_len += sum(1 for c in token if not 0x80 <= c < 0xC0)

        text = b"".join(token_bytes).decode("utf-8", errors="strict")
        return text, offsets

    def decode_batch(
        self, batch: Sequence[Sequence[int]], *, errors: str = "replace", num_threads: int = 8
    ) -> list[str]:
        """Decodes a batch (list of lists of tokens) into a list of strings."""
        decoder = functools.partial(self.decode, errors=errors)
        with ThreadPoolExecutor(num_threads) as e:
            return list(e.map(decoder, batch))

    def decode_bytes_batch(
        self, batch: Sequence[Sequence[int]], *, num_threads: int = 8
    ) -> list[bytes]:
        """Decodes a batch (list of lists of tokens) into a list of bytes."""
        with ThreadPoolExecutor(num_threads) as e:
            return list(e.map(self.decode_bytes, batch))

    # ====================
    # Miscellaneous
    # ====================

    def token_byte_values(self) -> list[bytes]:
        """Returns the list of all token byte values."""
        return self._core_bpe.token_byte_values()

    @property
    def eot_token(self) -> int:
        return self._special_tokens["<|endoftext|>"]

    @functools.cached_property
    def special_tokens_set(self) -> set[str]:
        return set(self._special_tokens.keys())

    def is_special_token(self, token: int) -> bool:
        assert isinstance(token, int)
        return token in self._special_token_values

    @functools.cached_property
    def _special_token_values(self) -> set[int]:
        return set(self._special_tokens.values())

    @property
    def n_vocab(self) -> int:
        """For backwards compatibility. Prefer to use `enc.max_token_value + 1`."""
        return self.max_token_value + 1

    # ====================
    # Private
    # ====================

    def _encode_single_piece(self, text_or_bytes: str | bytes) -> list[int]:
        """Encodes text corresponding to bytes without a regex split.

        NOTE: this will not encode any special tokens.

        ```
        >>> enc.encode_single_piece("helloqqqq")
        [31373, 38227, 38227]
        ```
        """
        if isinstance(text_or_bytes, str):
            text_or_bytes = text_or_bytes.encode("utf-8")
        return self._core_bpe.encode_single_piece(text_or_bytes)

    def _encode_only_native_bpe(self, text: str) -> list[int]:
        """Encodes a string into tokens, but do regex splitting in Python."""
        _unused_pat = regex.compile(self._pat_str)
        ret = []
        for piece in regex.findall(_unused_pat, text):
            ret.extend(self._core_bpe.encode_single_piece(piece))
        return ret

    def _encode_bytes(self, text: bytes) -> list[int]:
        return self._core_bpe._encode_bytes(text)

    def __getstate__(self) -> object:
        import tiktoken.registry

        # Registered encodings are pickled by reference (just the name).
        if self is tiktoken.registry.ENCODINGS.get(self.name):
            return self.name
        return {
            "name": self.name,
            "pat_str": self._pat_str,
            "mergeable_ranks": self._mergeable_ranks,
            "special_tokens": self._special_tokens,
        }

    def __setstate__(self, value: object) -> None:
        import tiktoken.registry

        if isinstance(value, str):
            self.__dict__ = tiktoken.registry.get_encoding(value).__dict__
            return
        self.__init__(**value)


@functools.lru_cache(maxsize=128)
def _special_token_regex(tokens: frozenset[str]) -> "regex.Pattern[str]":
    inner = "|".join(regex.escape(token) for token in tokens)
    return regex.compile(f"({inner})")


def raise_disallowed_special_token(token: str) -> NoReturn:
    raise ValueError(
        f"Encountered text corresponding to disallowed special token {token!r}.\n"
        f"If you want this text to be encoded as a special token, "
        f"pass it to `allowed_special`, e.g. `allowed_special={{{token!r}, ...}}`.\n"
        f"If you want this text to be encoded as normal text, disable the check for this "
        f"token by passing `disallowed_special=(enc.special_tokens_set - {{{token!r}}})`.\n"
        f"To disable this check for all special tokens, pass `disallowed_special=()`.\n"
    )
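
# Usage sketch (illustrative; not part of the recovered file). A minimal check
# of the API above, assuming the tiktoken package is installed and its "gpt2"
# encoding is registered; the expected token ids come from the docstring
# examples in this module, which use GPT-2 ids.
if __name__ == "__main__":
    import tiktoken

    enc = tiktoken.get_encoding("gpt2")
    assert enc.encode_ordinary("hello world") == [31373, 995]
    # Special tokens are only emitted when explicitly allowed.
    assert enc.encode("<|endoftext|>", allowed_special="all") == [50256]
    # With the check disabled, the special token is encoded as ordinary text.
    assert enc.encode("<|endoftext|>", disallowed_special=()) == [27, 91, 437, 1659, 5239, 91, 29]
    assert enc.decode([31373, 995]) == "hello world"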