HEX
Server: Apache/2.4.52 (Ubuntu)
System: Linux spn-python 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64
User: arjun (1000)
PHP: 8.1.2-1ubuntu2.20
Disabled: NONE
Upload Files
File: //usr/local/lib/python3.10/dist-packages/langchain/__pycache__/model_laboratory.cpython-310.pyc
# Reconstructed source of langchain/model_laboratory.py, recovered from the
# docstrings, constants, and symbol names visible in the compiled .pyc dump above.
"""Experiment with different models."""
from __future__ import annotations

from typing import List, Optional, Sequence

from langchain_core.language_models.llms import BaseLLM
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.utils.input import get_color_mapping, print_text

from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain


class ModelLaboratory:
    """A utility to experiment with and compare the performance of different models."""

    def __init__(self, chains: Sequence[Chain], names: Optional[List[str]] = None):
        """Initialize the ModelLaboratory with chains to experiment with.

        Args:
            chains (Sequence[Chain]): A sequence of chains to experiment with.
                Each chain must have exactly one input and one output variable.
            names (Optional[List[str]]): Optional list of names corresponding to each
                chain. If provided, its length must match the number of chains.

        Raises:
            ValueError: If any chain is not an instance of `Chain`.
            ValueError: If a chain does not have exactly one input variable.
            ValueError: If a chain does not have exactly one output variable.
            ValueError: If the length of `names` does not match the number of chains.
        """
        for chain in chains:
            if not isinstance(chain, Chain):
                raise ValueError(
                    "ModelLaboratory should now be initialized with Chains. "
                    "If you want to initialize with LLMs, use the `from_llms` method "
                    "instead (`ModelLaboratory.from_llms(...)`)"
                )
            if len(chain.input_keys) != 1:
                raise ValueError(
                    "Currently only support chains with one input variable, "
                    f"got {chain.input_keys}"
                )
            if len(chain.output_keys) != 1:
                raise ValueError(
                    "Currently only support chains with one output variable, "
                    f"got {chain.output_keys}"
                )
        if names is not None and len(names) != len(chains):
            raise ValueError("Length of chains does not match length of names.")
        self.chains = chains
        chain_range = [str(i) for i in range(len(self.chains))]
        self.chain_colors = get_color_mapping(chain_range)
        self.names = names

    @classmethod
    def from_llms(
        cls, llms: List[BaseLLM], prompt: Optional[PromptTemplate] = None
    ) -> ModelLaboratory:
        """Initialize the ModelLaboratory with LLMs and an optional prompt.

        Args:
            llms (List[BaseLLM]): A list of LLMs to experiment with.
            prompt (Optional[PromptTemplate]): An optional prompt to use with the LLMs.
                If provided, the prompt must contain exactly one input variable.

        Returns:
            ModelLaboratory: An instance of `ModelLaboratory` initialized with LLMs.
        """
        if prompt is None:
            prompt = PromptTemplate(input_variables=["_input"], template="{_input}")
        chains = [LLMChain(llm=llm, prompt=prompt) for llm in llms]
        names = [str(llm) for llm in llms]
        return cls(chains, names=names)

    def compare(self, text: str) -> None:
        """Compare model outputs on an input text.

        If a prompt was provided with starting the laboratory, then this text will be
        fed into the prompt. If no prompt was provided, then the input text is the
        entire prompt.

        Args:
            text: input text to run all models on.
        """
        print(f"\033[1mInput:\033[0m\n{text}\n")
        for i, chain in enumerate(self.chains):
            name = self.names[i] if self.names is not None else str(chain)
            print_text(name, end="\n")
            output = chain.run(text)
            print_text(output, color=self.chain_colors[str(i)], end="\n\n")
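The from_llms/compare flow documented above is easier to see with a small driver. A minimal usage sketch follows, assuming an OpenAI-compatible LLM wrapper is installed and credentials are configured; the langchain_openai import, model names, and the example prompt are illustrative assumptions, not taken from the dumped file.

# Hypothetical usage sketch of ModelLaboratory (adjust provider, models, and keys).
from langchain_openai import OpenAI  # assumed provider package, not part of the dump
from langchain_core.prompts.prompt import PromptTemplate
from langchain.model_laboratory import ModelLaboratory

# Two LLM configurations to compare side by side (hypothetical choices).
llms = [
    OpenAI(model="gpt-3.5-turbo-instruct", temperature=0.0),
    OpenAI(model="gpt-3.5-turbo-instruct", temperature=0.9),
]

# from_llms requires a prompt with exactly one input variable.
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer briefly: {question}",
)

lab = ModelLaboratory.from_llms(llms, prompt=prompt)
# Prints the input, then each chain's name and output in its own color.
lab.compare("What is the capital of France?")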