HEX
Server: Apache/2.4.52 (Ubuntu)
System: Linux spn-python 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64
User: arjun (1000)
PHP: 8.1.2-1ubuntu2.20
Disabled: NONE
File: //usr/local/lib/python3.10/dist-packages/langsmith/evaluation/__pycache__/_runner.cpython-310.pyc
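
The listing below is a compiled CPython .pyc, so only embedded strings survive as readable text. For reference, here is a minimal stdlib-only sketch that pulls the code objects and their docstrings out of such a file (assumptions: a CPython 3.7+ .pyc with its 16-byte header, and the path reported above):

import marshal
import types

PYC = "/usr/local/lib/python3.10/dist-packages/langsmith/evaluation/__pycache__/_runner.cpython-310.pyc"

def walk(code, depth=0):
    # Print each code object's name and the first line of its docstring, if it has one.
    doc = code.co_consts[0] if code.co_consts and isinstance(code.co_consts[0], str) else None
    print("  " * depth + code.co_name, "-", doc.splitlines()[0] if doc else "<no docstring>")
    for const in code.co_consts:
        if isinstance(const, types.CodeType):
            walk(const, depth + 1)

with open(PYC, "rb") as f:
    f.seek(16)  # skip magic, bit field, and source mtime/hash + size (CPython 3.7+ layout)
    walk(marshal.loads(f.read()))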
[Compiled CPython 3.10 bytecode (.pyc) for langsmith/evaluation/_runner.py. The marshalled bytecode is
not readable as text; the reconstruction below keeps the recoverable docstrings, names, signatures, and
string constants, with placeholder comments standing in for the compiled bodies.]

"""V2 Evaluation Interface."""

from __future__ import annotations

# Recoverable imports: collections, concurrent.futures as cf, datetime, functools, inspect, io,
# itertools, logging, pathlib, queue, random, textwrap, threading, uuid; contextvars.copy_context;
# typing / typing_extensions helpers (TYPE_CHECKING, Any, Awaitable, Callable, DefaultDict, Dict,
# Generator, Iterable, Iterator, List, Optional, Sequence, Tuple, TypeVar, Union, cast, TypedDict,
# overload); langsmith and its submodules (env, run_helpers, run_trees, schemas, utils); _warn_once
# from langsmith._internal._beta_decorator; the evaluator primitives from
# langsmith.evaluation.evaluator (SUMMARY_EVALUATOR_T, ComparisonEvaluationResult,
# DynamicComparisonRunEvaluator, DynamicRunEvaluator, EvaluationResult, EvaluationResults,
# RunEvaluator, _normalize_summary_evaluator, comparison_evaluator, run_evaluator);
# LangChainStringEvaluator from langsmith.evaluation.integrations; and, under TYPE_CHECKING,
# Runnable from langchain_core.runnables.

logger = logging.getLogger(__name__)

# Recoverable type aliases: TARGET_T, DATA_T, EVALUATOR_T, AEVALUATOR_T, SUMMARY_EVALUATOR_T,
# COMPARATIVE_EVALUATOR_T, and EXPERIMENT_T (Union[str, uuid.UUID, schemas.TracerSession]).

# Two @overload declarations of evaluate() precede the implementation: one typed to return
# ExperimentResults for a single target, the other ComparativeExperimentResults for a two-tuple of
# existing experiments.


def evaluate(
    target: Union[TARGET_T, Runnable, EXPERIMENT_T, Tuple[EXPERIMENT_T, EXPERIMENT_T]],
    data: Optional[DATA_T] = None,
    evaluators: Optional[Union[Sequence[EVALUATOR_T], Sequence[COMPARATIVE_EVALUATOR_T]]] = None,
    summary_evaluators: Optional[Sequence[SUMMARY_EVALUATOR_T]] = None,
    metadata: Optional[dict] = None,
    experiment_prefix: Optional[str] = None,
    description: Optional[str] = None,
    max_concurrency: Optional[int] = 0,
    num_repetitions: int = 1,
    client: Optional[langsmith.Client] = None,
    blocking: bool = True,
    experiment: Optional[EXPERIMENT_T] = None,
    upload_results: bool = True,
    **kwargs: Any,
) -> Union[ExperimentResults, ComparativeExperimentResults]:
    """Evaluate a target system on a given dataset.

    Args:
        target (TARGET_T | Runnable | EXPERIMENT_T | Tuple[EXPERIMENT_T, EXPERIMENT_T]):
            The target system or experiment(s) to evaluate. Can be a function
            that takes a dict and returns a dict, a langchain Runnable, an
            existing experiment ID, or a two-tuple of experiment IDs.
        data (DATA_T): The dataset to evaluate on. Can be a dataset name, a list of
            examples, or a generator of examples.
        evaluators (Sequence[EVALUATOR_T] | Sequence[COMPARATIVE_EVALUATOR_T] | None):
            A list of evaluators to run on each example. The evaluator signature
            depends on the target type. Defaults to None.
        summary_evaluators (Sequence[SUMMARY_EVALUATOR_T] | None): A list of summary
            evaluators to run on the entire dataset. Should not be specified if
            comparing two existing experiments. Defaults to None.
        metadata (dict | None): Metadata to attach to the experiment.
            Defaults to None.
        experiment_prefix (str | None): A prefix to provide for your experiment name.
            Defaults to None.
        description (str | None): A free-form text description for the experiment.
        max_concurrency (int | None): The maximum number of concurrent
            evaluations to run. If None then no limit is set. If 0 then no concurrency.
            Defaults to 0.
        client (langsmith.Client | None): The LangSmith client to use.
            Defaults to None.
        blocking (bool): Whether to block until the evaluation is complete.
            Defaults to True.
        num_repetitions (int): The number of times to run the evaluation.
            Each item in the dataset will be run and evaluated this many times.
            Defaults to 1.
        experiment (schemas.TracerSession | None): An existing experiment to
            extend. If provided, experiment_prefix is ignored. For advanced
            usage only. Should not be specified if target is an existing experiment or
            two-tuple of experiments.
        load_nested (bool): Whether to load all child runs for the experiment.
            Default is to only load the top-level root runs. Should only be specified
            when target is an existing experiment or two-tuple of experiments.
        randomize_order (bool): Whether to randomize the order of the outputs for each
            evaluation. Default is False. Should only be specified when target is a
            two-tuple of existing experiments.

    Returns:
        ExperimentResults: If target is a function, Runnable, or existing experiment.
        ComparativeExperimentResults: If target is a two-tuple of existing experiments.

    Examples:
        Prepare the dataset:

        >>> from typing import Sequence
        >>> from langsmith import Client
        >>> from langsmith.evaluation import evaluate
        >>> from langsmith.schemas import Example, Run
        >>> client = Client()
        >>> dataset = client.clone_public_dataset(
        ...     "https://smith.langchain.com/public/419dcab2-1d66-4b94-8901-0357ead390df/d"
        ... )
        >>> dataset_name = "Evaluate Examples"

        Basic usage:

        >>> def accuracy(run: Run, example: Example):
        ...     # Row-level evaluator for accuracy.
        ...     pred = run.outputs["output"]
        ...     expected = example.outputs["answer"]
        ...     return {"score": expected.lower() == pred.lower()}
        >>> def precision(runs: Sequence[Run], examples: Sequence[Example]):
        ...     # Experiment-level evaluator for precision.
        ...     # TP / (TP + FP)
        ...     predictions = [run.outputs["output"].lower() for run in runs]
        ...     expected = [example.outputs["answer"].lower() for example in examples]
        ...     # yes and no are the only possible answers
        ...     tp = sum([p == e for p, e in zip(predictions, expected) if p == "yes"])
        ...     fp = sum([p == "yes" and e == "no" for p, e in zip(predictions, expected)])
        ...     return {"score": tp / (tp + fp)}
        >>> def predict(inputs: dict) -> dict:
        ...     # This can be any function or just an API call to your app.
        ...     return {"output": "Yes"}
        >>> results = evaluate(
        ...     predict,
        ...     data=dataset_name,
        ...     evaluators=[accuracy],
        ...     summary_evaluators=[precision],
        ...     experiment_prefix="My Experiment",
        ...     description="Evaluating the accuracy of a simple prediction model.",
        ...     metadata={
        ...         "my-prompt-version": "abcd-1234",
        ...     },
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...

        Evaluating over only a subset of the examples

        >>> experiment_name = results.experiment_name
        >>> examples = client.list_examples(dataset_name=dataset_name, limit=5)
        >>> results = evaluate(
        ...     predict,
        ...     data=examples,
        ...     evaluators=[accuracy],
        ...     summary_evaluators=[precision],
        ...     experiment_prefix="My Experiment",
        ...     description="Just testing a subset synchronously.",
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...

        Streaming each prediction to debug more easily and eagerly.

        >>> results = evaluate(
        ...     predict,
        ...     data=dataset_name,
        ...     evaluators=[accuracy],
        ...     summary_evaluators=[precision],
        ...     description="I don't even have to block!",
        ...     blocking=False,
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...
        >>> for i, result in enumerate(results):  # doctest: +ELLIPSIS
        ...     pass

        Using the `evaluate` API with an off-the-shelf LangChain evaluator:

        >>> from langsmith.evaluation import LangChainStringEvaluator
        >>> from langchain_openai import ChatOpenAI
        >>> def prepare_criteria_data(run: Run, example: Example):
        ...     return {
        ...         "prediction": run.outputs["output"],
        ...         "reference": example.outputs["answer"],
        ...         "input": str(example.inputs),
        ...     }
        >>> results = evaluate(
        ...     predict,
        ...     data=dataset_name,
        ...     evaluators=[
        ...         accuracy,
        ...         LangChainStringEvaluator("embedding_distance"),
        ...         LangChainStringEvaluator(
        ...             "labeled_criteria",
        ...             config={
        ...                 "criteria": {
        ...                     "usefulness": "The prediction is useful if it is correct"
        ...                     " and/or asks a useful followup question."
        ...                 },
        ...                 "llm": ChatOpenAI(model="gpt-4o"),
        ...             },
        ...             prepare_data=prepare_criteria_data,
        ...         ),
        ...     ],
        ...     description="Evaluating with off-the-shelf LangChain evaluators.",
        ...     summary_evaluators=[precision],
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...

        Evaluating a LangChain object:

        >>> from langchain_core.runnables import chain as as_runnable
        >>> @as_runnable
        ... def nested_predict(inputs):
        ...     return {"output": "Yes"}
        >>> @as_runnable
        ... def lc_predict(inputs):
        ...     return nested_predict.invoke(inputs)
        >>> results = evaluate(
        ...     lc_predict.invoke,
        ...     data=dataset_name,
        ...     evaluators=[accuracy],
        ...     description="This time we're evaluating a LangChain object.",
        ...     summary_evaluators=[precision],
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...

    .. versionchanged:: 0.2.0

        'max_concurrency' default updated from None (no limit on concurrency)
        to 0 (no concurrency at all).
    """
    # Body (recovered from the embedded string constants): dispatch on the target type.
    # - Existing experiment (str, uuid.UUID, or schemas.TracerSession): arguments such as
    #   num_repetitions, experiment, upload_results, and data must not be set ("Received invalid
    #   arguments. ... should not be specified when target is an existing experiment."); logs
    #   "Running evaluation over existing experiment ..." and delegates to evaluate_existing().
    # - Two-tuple: each element must be the ID or schemas.TracerSession of an existing experiment,
    #   and summary_evaluators must not be set; logs "Running pairwise evaluation over existing
    #   experiments ..." and delegates to evaluate_comparative().
    # - Callable target: unknown **kwargs raise "Received unsupported arguments ..."; data is
    #   required; async callables raise "Async functions are not supported by `evaluate`. Please use
    #   `aevaluate` instead"; at most one of experiment and experiment_prefix may be given; the beta
    #   upload_results flag emits a one-time warning; logs "Running evaluation over target system ..."
    #   and delegates to _evaluate().


def evaluate_existing(
    experiment: Union[str, uuid.UUID, schemas.TracerSession],
    evaluators: Optional[Sequence[EVALUATOR_T]] = None,
    summary_evaluators: Optional[Sequence[SUMMARY_EVALUATOR_T]] = None,
    metadata: Optional[dict] = None,
    max_concurrency: Optional[int] = 0,
    client: Optional[langsmith.Client] = None,
    load_nested: bool = False,
    blocking: bool = True,
) -> ExperimentResults:
    """Evaluate existing experiment runs.

    Args:
        experiment (Union[str, uuid.UUID]): The identifier of the experiment to evaluate.
        data (DATA_T): The data to use for evaluation.
        evaluators (Optional[Sequence[EVALUATOR_T]]): Optional sequence of evaluators to use for individual run evaluation.
        summary_evaluators (Optional[Sequence[SUMMARY_EVALUATOR_T]]): Optional sequence of evaluators
            to apply over the entire dataset.
        metadata (Optional[dict]): Optional metadata to include in the evaluation results.
        max_concurrency (int | None): The maximum number of concurrent
            evaluations to run. If None then no limit is set. If 0 then no concurrency.
            Defaults to 0.
        client (Optional[langsmith.Client]): Optional Langsmith client to use for evaluation.
        load_nested: Whether to load all child runs for the experiment.
            Default is to only load the top-level root runs.
        blocking (bool): Whether to block until evaluation is complete.

    Returns:
        ExperimentResults: The evaluation results.

    Environment:
        - LANGSMITH_TEST_CACHE: If set, API calls will be cached to disk to save time and
            cost during testing. Recommended to commit the cache files to your repository
            for faster CI/CD runs.
            Requires the 'langsmith[vcr]' package to be installed.

    Examples:
        >>> from langsmith.evaluation import evaluate, evaluate_existing
        >>> dataset_name = "Evaluate Examples"
        >>> def predict(inputs: dict) -> dict:
        ...     # This can be any function or just an API call to your app.
        ...     return {"output": "Yes"}
        >>> # First run inference on the dataset
        ... results = evaluate(
        ...     predict,
        ...     data=dataset_name,
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...
        >>> # Then apply evaluators to the experiment
        ... def accuracy(run: Run, example: Example):
        ...     # Row-level evaluator for accuracy.
        ...     pred = run.outputs["output"]
        ...     expected = example.outputs["answer"]
        ...     return {"score": expected.lower() == pred.lower()}
        >>> def precision(runs: Sequence[Run], examples: Sequence[Example]):
        ...     # Experiment-level evaluator for precision.
        ...     # TP / (TP + FP)
        ...     predictions = [run.outputs["output"].lower() for run in runs]
        ...     expected = [example.outputs["answer"].lower() for example in examples]
        ...     # yes and no are the only possible answers
        ...     tp = sum([p == e for p, e in zip(predictions, expected) if p == "yes"])
        ...     fp = sum([p == "yes" and e == "no" for p, e in zip(predictions, expected)])
        ...     return {"score": tp / (tp + fp)}
        >>> experiment_name = (
        ...     results.experiment_name
        ... )  # Can use the returned experiment name
        >>> experiment_name = "My Experiment:64e6e91"  # Or manually specify
        >>> results = evaluate_existing(
        ...     experiment_name,
        ...     summary_evaluators=[precision],
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...
    """
    # Body: resolves the client via rt.get_cached_client(...), loads the experiment project
    # (_load_experiment), its root runs (_load_traces), and the referenced examples
    # (_load_examples_map), then delegates to _evaluate() over the loaded runs.


class ExperimentResultRow(TypedDict):
    run: schemas.Run
    example: schemas.Example
    evaluation_results: EvaluationResults


class ExperimentResults:
    """Represents the results of an evaluate() call.

    This class provides an iterator interface to iterate over the experiment results
    as they become available. It also provides methods to access the experiment name,
    the number of results, and to wait for the results to be processed.

    Methods:
        experiment_name() -> str: Returns the name of the experiment.
        wait() -> None: Waits for the experiment data to be processed.
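
    Examples:
        A reconstructed usage sketch (it assumes only the methods listed above plus the to_pandas()
        helper visible in the compiled class):

        >>> results = evaluate(predict, data=dataset_name, blocking=False)  # doctest: +SKIP
        >>> for row in results:  # rows stream in as predictions and evaluations finish
        ...     print(row["run"].id, row["evaluation_results"]["results"])  # doctest: +SKIP
        >>> results.wait()  # block until all rows have been processed  # doctest: +SKIP
        >>> df = results.to_pandas()  # doctest: +SKIP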
    """

    # Recoverable methods (bodies are compiled): __init__(experiment_manager, blocking=True) starts a
    # background thread that feeds a queue from _process_data(); experiment_name (property); __iter__
    # yields ExperimentResultRow items as they become available; _process_data() drains the manager's
    # get_results() and get_summary_scores(); __len__; to_pandas(start=0, end=None); _repr_html_
    # (uses pandas when installed); __repr__ ("<ExperimentResults ...>"); wait() joins the worker
    # thread.


def evaluate_comparative(
    experiments: Tuple[EXPERIMENT_T, EXPERIMENT_T],
    evaluators: Sequence[COMPARATIVE_EVALUATOR_T],
    experiment_prefix: Optional[str] = None,
    description: Optional[str] = None,
    max_concurrency: int = 5,
    client: Optional[langsmith.Client] = None,
    metadata: Optional[dict] = None,
    load_nested: bool = False,
    randomize_order: bool = False,
) -> ComparativeExperimentResults:
    """Evaluate existing experiment runs against each other.

    This lets you use pairwise preference scoring to generate more
    reliable feedback in your experiments.

    Args:
        experiments (Tuple[Union[str, uuid.UUID], Union[str, uuid.UUID]]):
            The identifiers of the experiments to compare.
        evaluators (Sequence[COMPARATIVE_EVALUATOR_T]):
            A list of evaluators to run on each example.
        experiment_prefix (Optional[str]): A prefix to provide for your experiment name.
            Defaults to None.
        description (Optional[str]): A free-form text description for the experiment.
        max_concurrency (int): The maximum number of concurrent evaluations to run.
            Defaults to 5.
        client (Optional[langsmith.Client]): The LangSmith client to use.
            Defaults to None.
        metadata (Optional[dict]): Metadata to attach to the experiment.
            Defaults to None.
        load_nested (bool): Whether to load all child runs for the experiment.
            Default is to only load the top-level root runs.
        randomize_order (bool): Whether to randomize the order of the outputs for each evaluation.
            Default is False.

    Returns:
        ComparativeExperimentResults: The results of the comparative evaluation.

    Examples:
        Suppose you want to compare two prompts to see which one is more effective.
        You would first prepare your dataset:

        >>> from typing import Sequence
        >>> from langsmith import Client
        >>> from langsmith.evaluation import evaluate
        >>> from langsmith.schemas import Example, Run
        >>> client = Client()
        >>> dataset = client.clone_public_dataset(
        ...     "https://smith.langchain.com/public/419dcab2-1d66-4b94-8901-0357ead390df/d"
        ... )
        >>> dataset_name = "Evaluate Examples"

        Then you would run your different prompts:
        >>> import functools
        >>> import openai
        >>> from langsmith.evaluation import evaluate
        >>> from langsmith.wrappers import wrap_openai
        >>> oai_client = openai.Client()
        >>> wrapped_client = wrap_openai(oai_client)
        >>> prompt_1 = "You are a helpful assistant."
        >>> prompt_2 = "You are an exceedingly helpful assistant."
        >>> def predict(inputs: dict, prompt: str) -> dict:
        ...     completion = wrapped_client.chat.completions.create(
        ...         model="gpt-3.5-turbo",
        ...         messages=[
        ...             {"role": "system", "content": prompt},
        ...             {
        ...                 "role": "user",
        ...                 "content": f"Context: {inputs['context']}"
        ...                 f"\n\n{inputs['question']}",
        ...             },
        ...         ],
        ...     )
        ...     return {"output": completion.choices[0].message.content}
        >>> results_1 = evaluate(
        ...     functools.partial(predict, prompt=prompt_1),
        ...     data=dataset_name,
        ...     description="Evaluating our basic system prompt.",
        ...     blocking=False,  # Run these experiments in parallel
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...
        >>> results_2 = evaluate(
        ...     functools.partial(predict, prompt=prompt_2),
        ...     data=dataset_name,
        ...     description="Evaluating our advanced system prompt.",
        ...     blocking=False,
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...
        >>> results_1.wait()
        >>> results_2.wait()
        >>> import time
        >>> time.sleep(10)  # Wait for the traces to be fully processed

        Finally, you would compare the two prompts directly:
        >>> import json
        >>> from langsmith.evaluation import evaluate_comparative
        >>> def score_preferences(runs: list, example: schemas.Example):
        ...     assert len(runs) == 2  # Comparing 2 systems
        ...     assert isinstance(example, schemas.Example)
        ...     assert all(run.reference_example_id == example.id for run in runs)
        ...     pred_a = runs[0].outputs["output"]
        ...     pred_b = runs[1].outputs["output"]
        ...     ground_truth = example.outputs["answer"]
        ...     tools = [
        ...         {
        ...             "type": "function",
        ...             "function": {
        ...                 "name": "rank_preferences",
        ...                 "description": "Saves the preferred response ('A' or 'B')",
        ...                 "parameters": {
        ...                     "type": "object",
        ...                     "properties": {
        ...                         "reasoning": {
        ...                             "type": "string",
        ...                             "description": "The reasoning behind the choice.",
        ...                         },
        ...                         "preferred_option": {
        ...                             "type": "string",
        ...                             "enum": ["A", "B"],
        ...                             "description": "The preferred option, either 'A' or 'B'",
        ...                         },
        ...                     },
        ...                     "required": ["preferred_option"],
        ...                 },
        ...             },
        ...         }
        ...     ]
        ...     completion = openai.Client().chat.completions.create(
        ...         model="gpt-3.5-turbo",
        ...         messages=[
        ...             {"role": "system", "content": "Select the better response."},
        ...             {
        ...                 "role": "user",
        ...                 "content": f"Option A: {pred_a}"
        ...                 f"\n\nOption B: {pred_b}"
        ...                 f"\n\nGround Truth: {ground_truth}",
        ...             },
        ...         ],
        ...         tools=tools,
        ...         tool_choice={
        ...             "type": "function",
        ...             "function": {"name": "rank_preferences"},
        ...         },
        ...     )
        ...     tool_args = completion.choices[0].message.tool_calls[0].function.arguments
        ...     loaded_args = json.loads(tool_args)
        ...     preference = loaded_args["preferred_option"]
        ...     comment = loaded_args["reasoning"]
        ...     if preference == "A":
        ...         return {
        ...             "key": "ranked_preference",
        ...             "scores": {runs[0].id: 1, runs[1].id: 0},
        ...             "comment": comment,
        ...         }
        ...     else:
        ...         return {
        ...             "key": "ranked_preference",
        ...             "scores": {runs[0].id: 0, runs[1].id: 1},
        ...             "comment": comment,
        ...         }
        >>> def score_length_difference(runs: list, example: schemas.Example):
        ...     # Just return whichever response is longer.
        ...     # Just an example, not actually useful in real life.
        ...     assert len(runs) == 2  # Comparing 2 systems
        ...     assert isinstance(example, schemas.Example)
        ...     assert all(run.reference_example_id == example.id for run in runs)
        ...     pred_a = runs[0].outputs["output"]
        ...     pred_b = runs[1].outputs["output"]
        ...     if len(pred_a) > len(pred_b):
        ...         return {
        ...             "key": "length_difference",
        ...             "scores": {runs[0].id: 1, runs[1].id: 0},
        ...         }
        ...     else:
        ...         return {
        ...             "key": "length_difference",
        ...             "scores": {runs[0].id: 0, runs[1].id: 1},
        ...         }
        >>> results = evaluate_comparative(
        ...     [results_1.experiment_name, results_2.experiment_name],
        ...     evaluators=[score_preferences, score_length_difference],
        ...     client=client,
        ... )  # doctest: +ELLIPSIS
        View the pairwise evaluation results at:...
        >>> eval_results = list(results)
        >>> assert len(eval_results) >= 10  # doctest: +SKIP
        >>> assert all(
        ...     "feedback.ranked_preference" in r["evaluation_results"]
        ...     for r in eval_results
        ... )  # doctest: +SKIP
        >>> assert all(
        ...     "feedback.length_difference" in r["evaluation_results"]
        ...     for r in eval_results
        ... )  # doctest: +SKIP
    """
    # Body (recoverable names and strings): validates inputs ("Comparative evaluation requires at
    # least 2 experiments.", "At least one evaluator is required for comparative evaluation.",
    # "max_concurrency must be a positive integer."), loads both projects and requires that they share
    # a reference dataset, derives an experiment name ("<A> vs. <B>" plus a short random suffix unless
    # experiment_prefix is given), creates a comparative experiment via
    # client.create_comparative_experiment(), prints the comparison URL
    # (_print_comparative_experiment_start), loads both experiments' runs, intersects their example
    # IDs, fetches examples in batches via client.list_examples(), wraps each evaluator with
    # comparison_evaluator(), runs the comparisons in a ContextThreadPoolExecutor, submits per-run
    # feedback via client.create_feedback() under a shared feedback_group_id, and returns
    # ComparativeExperimentResults(results, examples).


class ComparativeExperimentResults:
    """Represents the results of an evaluate_comparative() call.

    This class provides an iterator interface to iterate over the experiment results
    as they become available. It also provides methods to access the experiment name,
    the number of results, and to wait for the results to be processed.

    Methods:
        experiment_name() -> str: Returns the name of the experiment.
        wait() -> None: Waits for the experiment data to be processed.
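
    Examples:
        A reconstructed access sketch (it assumes the __getitem__ / __iter__ behaviour visible in the
        compiled class: results are keyed by example ID, and iteration yields each example together
        with its pairwise feedback):

        >>> results = evaluate_comparative(
        ...     [results_1.experiment_name, results_2.experiment_name],
        ...     evaluators=[score_preferences],
        ... )  # doctest: +SKIP
        >>> for row in results:  # doctest: +SKIP
        ...     print(row["example"].id if row["example"] else None, row["evaluation_results"])
        >>> results[example_id]  # look up a single example's comparison  # doctest: +SKIP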
    """

    # Recoverable methods: __init__(results, examples=None); __getitem__(key) -> "Return the result
    # associated with the given key."; __iter__ yields {"example": ..., "evaluation_results": ...}
    # pairs.


# Remaining module-level helpers in this region (names, docstrings, and key strings recovered; bodies
# are compiled):
# - _print_comparative_experiment_start(): prints "View the pairwise evaluation results at:" followed
#   by a .../compare?selectedSessions=...&comparativeExperiment=... URL derived from the projects.
# - _is_callable(target): callable(target) or _is_langchain_runnable(target).
# - _evaluate(target, data=..., ...): the shared driver; builds an _ExperimentManager, optionally
#   wraps API calls in the LANGSMITH_TEST_CACHE cache described above, chains
#   start() / with_predictions() / with_evaluators() / with_summary_evaluators(), and returns
#   ExperimentResults(manager, blocking=blocking).
# - _is_uuid(value): True if uuid.UUID(value) parses.
# - _load_experiment(project, client): client.read_project by ID or name.
# - _load_traces(project, client, load_nested=False): "Load nested traces for a given project."
#   (root runs only unless load_nested; child runs reattached in dotted_order).
# - _load_examples_map(client, project): maps example ID -> schemas.Example as of the project's
#   dataset_version metadata.
# - _load_tqdm(): tqdm.auto.tqdm when installed, else an identity function.
# - class _ExperimentManagerMixin: holds the client, experiment name, metadata, and description;
#   creates the project lazily (retrying with a fresh name suffix on LangSmithConflictError), attaches
#   git and environment metadata, and prints "View the evaluation results for experiment: '<name>' at:".


class _ExperimentManager(_ExperimentManagerMixin):
    """Manage the execution of experiments.

    Supports lazily running predictions and evaluations in parallel to facilitate
    result streaming and early debugging.

    Args:
        data (DATA_T): The data used for the experiment. Can be a dataset name or ID OR
            a generator of examples.
        num_repetitions (int): The number of times to run over the data.
        runs (Optional[Iterable[schemas.Run]]): The runs associated with the experiment
            predictions.
        experiment (Optional[schemas.TracerSession]): The tracer session
            associated with the experiment.
        experiment_prefix (Optional[str]): The prefix for the experiment name.
        metadata (Optional[dict]): Additional metadata for the experiment.
        client (Optional[langsmith.Client]): The Langsmith client used for
             the experiment.
        evaluation_results (Optional[Iterable[EvaluationResults]]): The evaluation
            results for the experiment.
        summary_results (Optional[Iterable[EvaluationResults]]): The aggregate results
            for the experiment.
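
    Examples:
        A reconstructed sketch of the lazy pipeline (it assumes only the chainable methods visible in
        the compiled class; evaluate() is the supported entry point and this internal API may differ):

        >>> manager = _ExperimentManager(dataset_name, client=client)  # doctest: +SKIP
        >>> manager = (
        ...     manager.start()
        ...     .with_predictions(predict)
        ...     .with_evaluators([accuracy])
        ...     .with_summary_evaluators([precision])
        ... )  # doctest: +SKIP
        >>> rows = list(manager.get_results())  # doctest: +SKIP
        >>> summary = manager.get_summary_scores()  # doctest: +SKIP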
    """

    # __init__ extends the mixin with data, runs, evaluation_results, summary_results,
    # num_repetitions=1, include_attachments / reuse_attachments flags, upload_results=True, and an
    # optional attachment_raw_data_dict, stored on private attributes for the lazy pipeline.

    def _reset_example_attachment_readers(self, example: schemas.Example) -> schemas.Example:
        """Reset attachment readers for an example.

        This is only in the case that an attachment is going to be used by more
        than 1 callable (target + evaluators). In that case we keep a single copy
        of the attachment data in self._attachment_raw_data_dict, and create
        readers from that data. This makes it so that we don't have to keep
        copies of the same data in memory, instead we can just create readers
        from the same data.
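
        Example:
            A minimal sketch of the pattern described above (illustrative only; the dict key format
            is an assumption, not taken from the compiled code):

            >>> import io
            >>> raw = {"example-1/report.txt": b"...attachment bytes..."}
            >>> reader_for_target = io.BytesIO(raw["example-1/report.txt"])
            >>> reader_for_evaluators = io.BytesIO(raw["example-1/report.txt"])  # new cursor, same bytes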
        """
        # Body: rebuilds example.attachments so each entry keeps its presigned_url / mime_type but
        # gets a fresh reader (io.BytesIO over the bytes cached in self._attachment_raw_data_dict),
        # and returns a copied schemas.Example.

    # Further recoverable _ExperimentManager members (bodies are compiled):
    # - examples (property): resolves self._data via _resolve_data(), optionally caches attachment
    #   bytes for reuse, and repeats the example stream num_repetitions times.
    # - dataset_id, evaluation_results, runs (properties; runs raises "Runs not provided in this
    #   experiment. Please predict first." when missing).
    # - start(): creates or loads the project, records num_repetitions in its metadata, and prints
    #   the experiment URL.
    # - with_predictions(target, max_concurrency=None): "Lazily apply the target function to the
    #   experiment."
    # - with_evaluators(evaluators, max_concurrency=None): "Lazily apply the provided evaluators to
    #   the experiment."
    # - with_summary_evaluators(summary_evaluators): "Lazily apply the provided summary evaluators to
    #   the experiment."
    # - get_results(): "Return the traces, evaluation results, and associated examples." (zips runs,
    #   examples, and evaluation_results into ExperimentResultRow items).
    # - get_summary_scores(): "If summary_evaluators were applied, consume and return the results."
    # - _predict(target, max_concurrency=None): "Run the target function on the examples." (serial,
    #   or fanned out over a ContextThreadPoolExecutor and yielded via cf.as_completed).
    # - _run_evaluators(evaluators, current_results, executor): runs each evaluator on a row inside a
    #   tracing context, logs feedback via client.evaluate_run() / _log_evaluation_feedback(), and on
    #   failure records an "Error running evaluator <...> on run <...>" entry per extracted feedback
    #   key.

    def _score(self, evaluators, max_concurrency=None):
        """Run the evaluators on the prediction stream.

        Expects runs to be available in the manager.
        (e.g. from a previous prediction step)
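
        A stream-scoring sketch in plain concurrent.futures terms (illustrative only; the helper
        names are assumptions, not the compiled implementation):

            from concurrent.futures import ThreadPoolExecutor, as_completed

            def score_stream(predictions, evaluators, score_row, max_workers=4):
                # Submit one scoring job per prediction and yield rows as soon as they complete.
                with ThreadPoolExecutor(max_workers=max_workers) as pool:
                    futures = [pool.submit(score_row, row, evaluators) for row in predictions]
                    for fut in as_completed(futures):
                        yield fut.result()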
        """
        # Body: with a ContextThreadPoolExecutor, either scores each row serially (max_concurrency=0)
        # or submits _run_evaluators jobs as predictions arrive, yielding rows as they complete.


# Remaining private helpers in the compiled module (names, docstrings, and key strings recovered):
# - _apply_summary_evaluators(summary_evaluators): runs each wrapped summary evaluator over all runs
#   and examples inside a tracing context and submits the aggregate feedback to the experiment
#   ("Error running summary evaluator <...>" on failure); yields {"results": [...]}.
# - _get_dataset_version() / _get_dataset_splits(): derive the dataset as-of timestamp and the set of
#   dataset_split values (defaulting to "base") for the experiment metadata.
# - _end(): updates the project with the final metadata (dataset_version, dataset_splits).
# - _resolve_evaluators() / _wrap_summary_evaluators(): normalize user callables into RunEvaluator /
#   traceable summary evaluators ("Runs[] (Length=...)", "Examples[] (Length=...)").
# - _ForwardResults (TypedDict: run, example) and _forward(): invoke the traceable target on one
#   example, logging "Error running target function: ..." on failure.
# - _is_valid_uuid() and _resolve_data(): "Return the examples for the given dataset." (accepts a
#   dataset ID, dataset name, schemas.Dataset, or an iterable of examples).
# - _ensure_traceable(target): "Ensure the target function is traceable."; raises
#   "Target must be a callable function or a langchain/langgraph object. For example:
#
#       def predict(inputs: dict) -> dict:
#           # do work, like chain.invoke(inputs)
#           return {...}
#
#       evaluate(predict, ...)"
#   otherwise wraps plain callables with rh.traceable(name="Target").
# - _evaluators_include_attachments() / _include_attachments(): inspect signatures to decide whether
#   the evaluators or the target accept an "attachments" argument ("Target function must accept at
#   least one positional argument (inputs).", "When passing 2 positional arguments, they must be
#   named 'inputs' and 'attachments', respectively.").
# - _resolve_experiment(experiment, runs, client): returns the experiment and runs to reuse, requiring
#   a name and a reference_dataset_id when an experiment is provided.
# - _get_random_name(): random_name() from langsmith.evaluation._name_generation.
# - _extract_feedback_keys() / _extract_code_evaluator_feedback_keys(): parse an evaluator's source
#   with ast to recover the feedback keys it returns (used for the error feedback above).
# - _to_pandas() / _flatten_experiment_results(): build a DataFrame of inputs.*, outputs.*,
#   reference.*, feedback.*, execution_time, example_id, and id columns; raises
#   "The 'pandas' library is required to use the 'to_pandas' function. Please install it using
#   'pip install pandas' or 'conda install pandas' before calling this method." when pandas is
#   missing.
# - _import_langchain_runnable() (lru_cache) and _is_langchain_runnable(): optional langchain_core
#   Runnable detection.
#
# End of recoverable content; the remainder of the file holds the marshalled constant and
# line-number tables for the code objects above.