Server: Apache/2.4.52 (Ubuntu)
System: Linux spn-python 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64
User: arjun (1000)
PHP: 8.1.2-1ubuntu2.20
Disabled: NONE
File: //usr/local/lib/python3.10/dist-packages/langchain/evaluation/__pycache__/loading.cpython-310.pyc
"""Loading datasets and evaluators."""
from typing import Any, Dict, List, Optional, Sequence, Type, Union

from langchain_core.language_models import BaseLanguageModel

from langchain.chains.base import Chain
from langchain.evaluation.agents.trajectory_eval_chain import TrajectoryEvalChain
from langchain.evaluation.comparison import PairwiseStringEvalChain
from langchain.evaluation.comparison.eval_chain import LabeledPairwiseStringEvalChain
from langchain.evaluation.criteria.eval_chain import (
    CriteriaEvalChain,
    LabeledCriteriaEvalChain,
)
from langchain.evaluation.embedding_distance.base import (
    EmbeddingDistanceEvalChain,
    PairwiseEmbeddingDistanceEvalChain,
)
from langchain.evaluation.exact_match.base import ExactMatchStringEvaluator
from langchain.evaluation.parsing.base import (
    JsonEqualityEvaluator,
    JsonValidityEvaluator,
)
from langchain.evaluation.parsing.json_distance import JsonEditDistanceEvaluator
from langchain.evaluation.parsing.json_schema import JsonSchemaEvaluator
from langchain.evaluation.qa import ContextQAEvalChain, CotQAEvalChain, QAEvalChain
from langchain.evaluation.regex_match.base import RegexMatchStringEvaluator
from langchain.evaluation.schema import EvaluatorType, LLMEvalChain, StringEvaluator
from langchain.evaluation.scoring.eval_chain import (
    LabeledScoreStringEvalChain,
    ScoreStringEvalChain,
)
from langchain.evaluation.string_distance.base import (
    PairwiseStringDistanceEvalChain,
    StringDistanceEvalChain,
)


def load_dataset(uri: str) -> List[Dict]:
    """Load a dataset from the `LangChainDatasets on HuggingFace <https://huggingface.co/LangChainDatasets>`_.

    Args:
        uri: The uri of the dataset to load.

    Returns:
        A list of dictionaries, each representing a row in the dataset.

    **Prerequisites**

    .. code-block:: shell

        pip install datasets

    Examples
    --------
    .. code-block:: python

        from langchain.evaluation import load_dataset
        ds = load_dataset("llm-math")
    """
    try:
        from datasets import load_dataset
    except ImportError:
        raise ImportError(
            "load_dataset requires the `datasets` package."
            " Please install with `pip install datasets`"
        )

    dataset = load_dataset(f"LangChainDatasets/{uri}")
    return [d for d in dataset["train"]]


_EVALUATOR_MAP: Dict[
    EvaluatorType, Union[Type[LLMEvalChain], Type[Chain], Type[StringEvaluator]]
] = {
    EvaluatorType.QA: QAEvalChain,
    EvaluatorType.COT_QA: CotQAEvalChain,
    EvaluatorType.CONTEXT_QA: ContextQAEvalChain,
    EvaluatorType.PAIRWISE_STRING: PairwiseStringEvalChain,
    EvaluatorType.SCORE_STRING: ScoreStringEvalChain,
    EvaluatorType.LABELED_PAIRWISE_STRING: LabeledPairwiseStringEvalChain,
    EvaluatorType.LABELED_SCORE_STRING: LabeledScoreStringEvalChain,
    EvaluatorType.AGENT_TRAJECTORY: TrajectoryEvalChain,
    EvaluatorType.CRITERIA: CriteriaEvalChain,
    EvaluatorType.LABELED_CRITERIA: LabeledCriteriaEvalChain,
    EvaluatorType.STRING_DISTANCE: StringDistanceEvalChain,
    EvaluatorType.PAIRWISE_STRING_DISTANCE: PairwiseStringDistanceEvalChain,
    EvaluatorType.EMBEDDING_DISTANCE: EmbeddingDistanceEvalChain,
    EvaluatorType.PAIRWISE_EMBEDDING_DISTANCE: PairwiseEmbeddingDistanceEvalChain,
    EvaluatorType.JSON_VALIDITY: JsonValidityEvaluator,
    EvaluatorType.JSON_EQUALITY: JsonEqualityEvaluator,
    EvaluatorType.JSON_EDIT_DISTANCE: JsonEditDistanceEvaluator,
    EvaluatorType.JSON_SCHEMA_VALIDATION: JsonSchemaEvaluator,
    EvaluatorType.REGEX_MATCH: RegexMatchStringEvaluator,
    EvaluatorType.EXACT_MATCH: ExactMatchStringEvaluator,
}


def load_evaluator(
    evaluator: EvaluatorType,
    *,
    llm: Optional[BaseLanguageModel] = None,
    **kwargs: Any,
) -> Union[Chain, StringEvaluator]:
    """Load the requested evaluation chain specified by a string.

    Parameters
    ----------
    evaluator : EvaluatorType
        The type of evaluator to load.
    llm : BaseLanguageModel, optional
        The language model to use for evaluation, by default None
    **kwargs : Any
        Additional keyword arguments to pass to the evaluator.

    Returns
    -------
    Chain
        The loaded evaluation chain.

    Examples
    --------
    >>> from langchain.evaluation import load_evaluator, EvaluatorType
    >>> evaluator = load_evaluator(EvaluatorType.QA)
    """
    if evaluator not in _EVALUATOR_MAP:
        raise ValueError(
            f"Unknown evaluator type: {evaluator}"
            f"\nValid types are: {list(_EVALUATOR_MAP.keys())}"
        )
    evaluator_cls = _EVALUATOR_MAP[evaluator]
    if issubclass(evaluator_cls, LLMEvalChain):
        try:
            try:
                from langchain_openai import ChatOpenAI
            except ImportError:
                try:
                    from langchain_community.chat_models.openai import ChatOpenAI
                except ImportError:
                    raise ImportError(
                        "Could not import langchain_openai or fallback onto "
                        "langchain_community. Please install langchain_openai "
                        "or specify a language model explicitly. "
                        "It's recommended to install langchain_openai AND "
                        "specify a language model explicitly."
                    )
            llm = llm or ChatOpenAI(model="gpt-4", seed=42, temperature=0)
        except Exception as e:
            raise ValueError(
                f"Evaluation with the {evaluator_cls} requires a "
                "language model to function."
                " Failed to create the default 'gpt-4' model."
                " Please manually provide an evaluation LLM"
                " or check your openai credentials."
            ) from e
        return evaluator_cls.from_llm(llm=llm, **kwargs)
    else:
        return evaluator_cls(**kwargs)


def load_evaluators(
    evaluators: Sequence[EvaluatorType],
    *,
    llm: Optional[BaseLanguageModel] = None,
    config: Optional[dict] = None,
    **kwargs: Any,
) -> List[Union[Chain, StringEvaluator]]:
    """Load evaluators specified by a list of evaluator types.

    Parameters
    ----------
    evaluators : Sequence[EvaluatorType]
        The list of evaluator types to load.
    llm : BaseLanguageModel, optional
        The language model to use for evaluation, if none is provided, a default
        ChatOpenAI gpt-4 model will be used.
    config : dict, optional
        A dictionary mapping evaluator types to additional keyword arguments,
        by default None
    **kwargs : Any
        Additional keyword arguments to pass to all evaluators.

    Returns
    -------
    List[Chain]
        The loaded evaluators.

    Examples
    --------
    >>> from langchain.evaluation import load_evaluators, EvaluatorType
    >>> evaluators = [EvaluatorType.QA, EvaluatorType.CRITERIA]
    >>> loaded_evaluators = load_evaluators(evaluators, criteria="helpfulness")
    """
    loaded = []
    for evaluator in evaluators:
        _kwargs = config.get(evaluator, {}) if config else {}
        loaded.append(load_evaluator(evaluator, llm=llm, **{**kwargs, **_kwargs}))
    return loaded
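The module above is built around a registry-dispatch pattern: a dict maps an enum key to a class, and a loader function validates the key, looks up the class, and instantiates it. A minimal self-contained sketch of that pattern, with hypothetical `EvalKind`, `EchoEvaluator`, `ReverseEvaluator`, and `load` names (none of these are part of langchain):

```python
from enum import Enum
from typing import Any, Dict, Type


class EvalKind(Enum):
    ECHO = "echo"
    REVERSE = "reverse"


class EchoEvaluator:
    """Hypothetical evaluator: returns the prediction unchanged."""

    def evaluate(self, prediction: str) -> str:
        return prediction


class ReverseEvaluator:
    """Hypothetical evaluator: reverses the prediction string."""

    def evaluate(self, prediction: str) -> str:
        return prediction[::-1]


# Registry keyed by the enum, mirroring how _EVALUATOR_MAP is structured.
_REGISTRY: Dict[EvalKind, Type] = {
    EvalKind.ECHO: EchoEvaluator,
    EvalKind.REVERSE: ReverseEvaluator,
}


def load(kind: EvalKind, **kwargs: Any):
    """Validate the key, look up the class, instantiate it with kwargs."""
    if kind not in _REGISTRY:
        raise ValueError(
            f"Unknown evaluator type: {kind}. Valid types are: {list(_REGISTRY)}"
        )
    return _REGISTRY[kind](**kwargs)
```

The benefit of keeping the registry as module-level data rather than an if/elif chain is that adding an evaluator is a one-line dict entry, and callers can enumerate the valid keys (as the `ValueError` message here does) without touching the dispatch logic.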