HEX
Server: Apache/2.4.52 (Ubuntu)
System: Linux spn-python 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64
User: arjun (1000)
PHP: 8.1.2-1ubuntu2.20
Disabled: NONE
File: //lib/python3/dist-packages/pip/_vendor/pygments/formatters/__pycache__/other.cpython-310.pyc
"""
    pygments.formatters.other
    ~~~~~~~~~~~~~~~~~~~~~~~~~

    Other formatters: NullFormatter, RawTokenFormatter.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pip._vendor.pygments.formatter import Formatter
from pip._vendor.pygments.util import get_choice_opt
from pip._vendor.pygments.token import Token
from pip._vendor.pygments.console import colorize

__all__ = ['NullFormatter', 'RawTokenFormatter', 'TestcaseFormatter']


class NullFormatter(Formatter):
    """
    Output the text unchanged without any formatting.
    """
    name = 'Text only'
    aliases = ['text', 'null']
    filenames = ['*.txt']

    def format(self, tokensource, outfile):
        enc = self.encoding
        for ttype, value in tokensource:
            if enc:
                outfile.write(value.encode(enc))
            else:
                outfile.write(value)


class RawTokenFormatter(Formatter):
    r"""
    Format tokens as a raw representation for storing token streams.

    The format is ``tokentype<TAB>repr(tokenstring)\n``. The output can later
    be converted to a token stream with the `RawTokenLexer`, described in the
    :doc:`lexer list <lexers>`.

    Only two options are accepted:

    `compress`
        If set to ``'gz'`` or ``'bz2'``, compress the output with the given
        compression algorithm after encoding (default: ``''``).
    `error_color`
        If set to a color name, highlight error tokens using that color.  If
        set but with no value, defaults to ``'red'``.

        .. versionadded:: 0.11

    """
    name = 'Raw tokens'
    aliases = ['raw', 'tokens']
    filenames = ['*.raw']

    unicodeoutput = False

    def __init__(self, **options):
        Formatter.__init__(self, **options)
        # The raw token formatter only outputs ASCII, regardless of any
        # encoding option passed on the command line.
        self.encoding = 'ascii'
        self.compress = get_choice_opt(options, 'compress',
                                       ['', 'none', 'gz', 'bz2'], '')
        self.error_color = options.get('error_color', None)
        if self.error_color is True:
            self.error_color = 'red'
        if self.error_color is not None:
            try:
                colorize(self.error_color, '')
            except KeyError:
                raise ValueError("Invalid color %r specified" %
                                 self.error_color)

    def format(self, tokensource, outfile):
        try:
            outfile.write(b'')
        except TypeError:
            raise TypeError('The raw tokens formatter needs a binary '
                            'output file')
        if self.compress == 'gz':
            import gzip
            outfile = gzip.GzipFile('', 'wb', 9, outfile)

            write = outfile.write
            flush = outfile.close
        elif self.compress == 'bz2':
            import bz2
            compressor = bz2.BZ2Compressor(9)

            def write(text):
                outfile.write(compressor.compress(text))

            def flush():
                outfile.write(compressor.flush())
                outfile.flush()
        else:
            write = outfile.write
            flush = outfile.flush

        if self.error_color:
            for ttype, value in tokensource:
                line = b"%r\t%r\n" % (ttype, value)
                if ttype is Token.Error:
                    write(colorize(self.error_color, line))
                else:
                    write(line)
        else:
            for ttype, value in tokensource:
                write(b"%r\t%r\n" % (ttype, value))
        flush()


TESTCASE_BEFORE = '''\
    def testNeedsName(lexer):
        fragment = %r
        tokens = [
'''
TESTCASE_AFTER = '''\
        ]
        assert list(lexer.get_tokens(fragment)) == tokens
'''


class TestcaseFormatter(Formatter):
    """
    Format tokens as appropriate for a new testcase.

    .. versionadded:: 2.0
    """
    name = 'Testcase'
    aliases = ['testcase']

    def __init__(self, **options):
        Formatter.__init__(self, **options)
        if self.encoding is not None and self.encoding != 'utf-8':
            raise ValueError("Only None and utf-8 are allowed encodings.")

    def format(self, tokensource, outfile):
        indentation = ' ' * 12
        rawbuf = []
        outbuf = []
        for ttype, value in tokensource:
            rawbuf.append(value)
            outbuf.append('%s(%s, %r),\n' % (indentation, ttype, value))

        before = TESTCASE_BEFORE % (''.join(rawbuf),)
        during = ''.join(outbuf)
        after = TESTCASE_AFTER
        if self.encoding is None:
            outfile.write(before + during + after)
        else:
            outfile.write(before.encode('utf-8'))
            outfile.write(during.encode('utf-8'))
            outfile.write(after.encode('utf-8'))
        outfile.flush()
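The module above documents a raw token layout of ``tokentype<TAB>repr(tokenstring)\n`` written to a binary file. As a rough, stdlib-only sketch of that layout (the token names and the `format_raw` helper here are hypothetical stand-ins for illustration, not part of pygments or pip):

```python
import io

# Minimal stand-in token stream: (token type name, text) pairs, mirroring
# the (ttype, value) pairs a pygments lexer would yield.
tokens = [
    ("Token.Keyword", "def"),
    ("Token.Text", " "),
    ("Token.Name.Function", "greet"),
]

def format_raw(tokensource, outfile):
    # Emit one "tokentype<TAB>repr(tokenstring)\n" record per token into a
    # binary file object; %a applies ascii()/repr() to the token text.
    for ttype, value in tokensource:
        outfile.write(b"%s\t%a\n" % (ttype.encode("ascii"), value))

buf = io.BytesIO()
format_raw(tokens, buf)
print(buf.getvalue().decode("ascii"))
# One line per token, e.g.: Token.Keyword<TAB>'def'
```

Because each record is a single tab-separated line with the text repr-escaped, the stream stays parseable even when token values contain newlines.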