File: /usr/local/lib/python3.10/dist-packages/langsmith/evaluation/__pycache__/_arunner.cpython-310.pyc
[Compiled CPython 3.10 bytecode for langsmith/evaluation/_arunner.py. The raw dump is unreadable; the listing below keeps the recoverable strings (docstrings, names, annotation tables). Signatures are reconstructed from those strings; bracketed lines are editorial placeholders for code that cannot be recovered from the bytecode.]
"""V2 Evaluation Interface."""

from __future__ import annotations

[Bytecode header and import table omitted. Recoverable imports: asyncio,
concurrent.futures, io, logging, pathlib, uuid; TYPE_CHECKING, Any,
AsyncIterable, AsyncIterator, Awaitable, Callable, Dict, Iterable, List,
Optional, Sequence, TypeVar, Union, cast from typing; langsmith's
run_helpers, run_trees, schemas, utils, _internal._aiter (aitertools),
_internal._beta_decorator (_warn_once); the evaluation._runner internals
(AEVALUATOR_T, DATA_T, EVALUATOR_T, ExperimentResultRow,
_evaluators_include_attachments, _ExperimentManagerMixin,
_extract_feedback_keys, _ForwardResults, _include_attachments,
_is_langchain_runnable, _load_examples_map, _load_experiment, _load_tqdm,
_load_traces, _resolve_data, _resolve_evaluators, _resolve_experiment,
_to_pandas, _wrap_summary_evaluators); and SUMMARY_EVALUATOR_T,
EvaluationResult, EvaluationResults, RunEvaluator from
langsmith.evaluation.evaluator. pandas and langchain_core.runnables.Runnable
are imported under TYPE_CHECKING only.]

logger = logging.getLogger(__name__)

ATARGET_T = Union[Callable[[dict], Awaitable[dict]], Callable[[dict, dict], Awaitable[dict]]]
[Alias reconstructed from the type table: an async callable over example inputs.]

[Signature reconstructed from the annotation strings and constant tables;
defaults follow the docstring.]
async def aevaluate(
    target: Union[ATARGET_T, AsyncIterable[dict], Runnable, str, uuid.UUID, schemas.TracerSession],
    /,
    data: Union[DATA_T, AsyncIterable[schemas.Example], Iterable[schemas.Example], None] = None,
    evaluators: Optional[Sequence[Union[EVALUATOR_T, AEVALUATOR_T]]] = None,
    summary_evaluators: Optional[Sequence[SUMMARY_EVALUATOR_T]] = None,
    metadata: Optional[dict] = None,
    experiment_prefix: Optional[str] = None,
    description: Optional[str] = None,
    max_concurrency: Optional[int] = 0,
    num_repetitions: int = 1,
    client: Optional[langsmith.Client] = None,
    blocking: bool = True,
    experiment: Optional[Union[schemas.TracerSession, str, uuid.UUID]] = None,
    upload_results: bool = True,
    **kwargs: Any,
) -> AsyncExperimentResults:
    """Evaluate an async target system on a given dataset.

    Args:
        target (AsyncCallable[[dict], dict] | AsyncIterable[dict] | Runnable | EXPERIMENT_T | Tuple[EXPERIMENT_T, EXPERIMENT_T]):
            The target system or experiment(s) to evaluate. Can be an async function
            that takes a dict and returns a dict, a langchain Runnable, an
            existing experiment ID, or a two-tuple of experiment IDs (note that
            comparing two existing experiments asynchronously is not yet
            supported; use evaluate() for that).
        data (Union[DATA_T, AsyncIterable[schemas.Example]]): The dataset to evaluate on. Can be a dataset name, a list of
            examples, an async generator of examples, or an async iterable of examples.
        evaluators (Optional[Sequence[EVALUATOR_T]]): A list of evaluators to run
            on each example. Defaults to None.
        summary_evaluators (Optional[Sequence[SUMMARY_EVALUATOR_T]]): A list of summary
            evaluators to run on the entire dataset. Defaults to None.
        metadata (Optional[dict]): Metadata to attach to the experiment.
            Defaults to None.
        experiment_prefix (Optional[str]): A prefix to provide for your experiment name.
            Defaults to None.
        description (Optional[str]): A description of the experiment.
        max_concurrency (int | None): The maximum number of concurrent
            evaluations to run. If None, no limit is set; if 0, everything runs
            sequentially. Defaults to 0.
        num_repetitions (int): The number of times to run the evaluation.
            Each item in the dataset will be run and evaluated this many times.
            Defaults to 1.
        client (Optional[langsmith.Client]): The LangSmith client to use.
            Defaults to None.
        blocking (bool): Whether to block until the evaluation is complete.
            Defaults to True.
        experiment (Optional[schemas.TracerSession]): An existing experiment to
            extend. If provided, experiment_prefix is ignored. For advanced
            usage only.
        load_nested: Whether to load all child runs for the experiment.
            Default is to only load the top-level root runs. Should only be specified
            when evaluating an existing experiment.

    Returns:
        AsyncIterator[ExperimentResultRow]: An async iterator over the experiment results.

    Environment:
        - LANGSMITH_TEST_CACHE: If set, API calls will be cached to disk to save time and
            cost during testing. Recommended to commit the cache files to your repository
            for faster CI/CD runs.
            Requires the 'langsmith[vcr]' package to be installed.
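
            (Editor's sketch, not part of the original docstring: pointing the
            cache at a directory before running; the path is illustrative.)

            >>> import os
            >>> os.environ["LANGSMITH_TEST_CACHE"] = "tests/cassettes"  # doctest: +SKIP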

    Examples:
        >>> from typing import Sequence
        >>> from langsmith import Client, aevaluate
        >>> from langsmith.schemas import Example, Run
        >>> client = Client()
        >>> dataset = client.clone_public_dataset(
        ...     "https://smith.langchain.com/public/419dcab2-1d66-4b94-8901-0357ead390df/d"
        ... )
        >>> dataset_name = "Evaluate Examples"

        Basic usage:

        >>> def accuracy(run: Run, example: Example):
        ...     # Row-level evaluator for accuracy.
        ...     pred = run.outputs["output"]
        ...     expected = example.outputs["answer"]
        ...     return {"score": expected.lower() == pred.lower()}

        >>> def precision(runs: Sequence[Run], examples: Sequence[Example]):
        ...     # Experiment-level evaluator for precision.
        ...     # TP / (TP + FP)
        ...     predictions = [run.outputs["output"].lower() for run in runs]
        ...     expected = [example.outputs["answer"].lower() for example in examples]
        ...     # yes and no are the only possible answers
        ...     tp = sum([p == e for p, e in zip(predictions, expected) if p == "yes"])
        ...     fp = sum([p == "yes" and e == "no" for p, e in zip(predictions, expected)])
        ...     return {"score": tp / (tp + fp)}

        >>> import asyncio
        >>> async def apredict(inputs: dict) -> dict:
        ...     # This can be any async function or just an API call to your app.
        ...     await asyncio.sleep(0.1)
        ...     return {"output": "Yes"}
        >>> results = asyncio.run(
        ...     aevaluate(
        ...         apredict,
        ...         data=dataset_name,
        ...         evaluators=[accuracy],
        ...         summary_evaluators=[precision],
        ...         experiment_prefix="My Experiment",
        ...         description="Evaluate the accuracy of the model asynchronously.",
        ...         metadata={
        ...             "my-prompt-version": "abcd-1234",
        ...         },
        ...     )
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...

        Evaluating over only a subset of the examples using an async generator:

        >>> async def example_generator():
        ...     examples = client.list_examples(dataset_name=dataset_name, limit=5)
        ...     for example in examples:
        ...         yield example
        >>> results = asyncio.run(
        ...     aevaluate(
        ...         apredict,
        ...         data=example_generator(),
        ...         evaluators=[accuracy],
        ...         summary_evaluators=[precision],
        ...         experiment_prefix="My Subset Experiment",
        ...         description="Evaluate a subset of examples asynchronously.",
        ...     )
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...

        Streaming each prediction to debug more easily and eagerly:

        >>> results = asyncio.run(
        ...     aevaluate(
        ...         apredict,
        ...         data=dataset_name,
        ...         evaluators=[accuracy],
        ...         summary_evaluators=[precision],
        ...         experiment_prefix="My Streaming Experiment",
        ...         description="Streaming predictions for debugging.",
        ...         blocking=False,
        ...     )
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...

        >>> async def aenumerate(iterable):
        ...     async for elem in iterable:
        ...         print(elem)
        >>> asyncio.run(aenumerate(results))

        Running without concurrency:

        >>> results = asyncio.run(
        ...     aevaluate(
        ...         apredict,
        ...         data=dataset_name,
        ...         evaluators=[accuracy],
        ...         summary_evaluators=[precision],
        ...         experiment_prefix="My Experiment Without Concurrency",
        ...         description="This was run without concurrency.",
        ...         max_concurrency=0,
        ...     )
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...

        Using async evaluators:

        >>> async def helpfulness(run: Run, example: Example):
        ...     # Row-level evaluator for helpfulness.
        ...     await asyncio.sleep(5)  # Replace with your LLM API call
        ...     return {"score": run.outputs["output"] == "Yes"}

        >>> results = asyncio.run(
        ...     aevaluate(
        ...         apredict,
        ...         data=dataset_name,
        ...         evaluators=[helpfulness],
        ...         summary_evaluators=[precision],
        ...         experiment_prefix="My Helpful Experiment",
        ...         description="Applying async evaluators example.",
        ...     )
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...
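
        (Editor's sketch, not part of the original docstring: capping rather
        than disabling concurrency; assumes the ``apredict``, ``accuracy``,
        and ``precision`` helpers defined above.)

        >>> results = asyncio.run(
        ...     aevaluate(
        ...         apredict,
        ...         data=dataset_name,
        ...         evaluators=[accuracy],
        ...         summary_evaluators=[precision],
        ...         max_concurrency=10,
        ...     )
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...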


    .. versionchanged:: 0.2.0

        'max_concurrency' default updated from None (no limit on concurrency)
        to 0 (no concurrency at all).
    """
    [Body omitted (bytecode): validates the argument combination, raising
    "Received invalid arguments. ... should not be specified when target is
    an existing experiment." when experiment options accompany an existing
    experiment; dispatches an experiment ID, name, or TracerSession target to
    aevaluate_existing; rejects two-tuple comparison targets ("Running a
    comparison of two existing experiments asynchronously is not currently
    supported. Please use the `evaluate()` method instead and make sure that
    your evaluators are defined as synchronous functions."); requires 'data'
    for function targets; forbids passing both 'experiment' and
    'experiment_prefix'; warns once that "'upload_results' parameter is in
    beta"; and otherwise calls _aevaluate on the target.]


[Signature reconstructed from the annotation strings and constant tables.]
async def aevaluate_existing(
    experiment: Union[str, uuid.UUID, schemas.TracerSession],
    evaluators: Optional[Sequence[Union[EVALUATOR_T, AEVALUATOR_T]]] = None,
    summary_evaluators: Optional[Sequence[SUMMARY_EVALUATOR_T]] = None,
    metadata: Optional[dict] = None,
    max_concurrency: Optional[int] = 0,
    client: Optional[langsmith.Client] = None,
    load_nested: bool = False,
    blocking: bool = True,
) -> AsyncExperimentResults:
    """Evaluate existing experiment runs asynchronously.

    Args:
        experiment (Union[str, uuid.UUID]): The identifier of the experiment to evaluate.
        evaluators (Optional[Sequence[EVALUATOR_T]]): Optional sequence of evaluators to use for individual run evaluation.
        summary_evaluators (Optional[Sequence[SUMMARY_EVALUATOR_T]]): Optional sequence of evaluators
            to apply over the entire dataset.
        metadata (Optional[dict]): Optional metadata to include in the evaluation results.
        max_concurrency (int | None): The maximum number of concurrent
            evaluations to run. If None then no limit is set. If 0 then no concurrency.
            Defaults to 0.
        client (Optional[langsmith.Client]): Optional Langsmith client to use for evaluation.
        load_nested: Whether to load all child runs for the experiment.
            Default is to only load the top-level root runs.
        blocking (bool): Whether to block until evaluation is complete.

    Returns:
        AsyncIterator[ExperimentResultRow]: An async iterator over the experiment results.

    Examples:
        Define your evaluators

        >>> from typing import Sequence
        >>> from langsmith.schemas import Example, Run
        >>> def accuracy(run: Run, example: Example):
        ...     # Row-level evaluator for accuracy.
        ...     pred = run.outputs["output"]
        ...     expected = example.outputs["answer"]
        ...     return {"score": expected.lower() == pred.lower()}
        >>> def precision(runs: Sequence[Run], examples: Sequence[Example]):
        ...     # Experiment-level evaluator for precision.
        ...     # TP / (TP + FP)
        ...     predictions = [run.outputs["output"].lower() for run in runs]
        ...     expected = [example.outputs["answer"].lower() for example in examples]
        ...     # yes and no are the only possible answers
        ...     tp = sum([p == e for p, e in zip(predictions, expected) if p == "yes"])
        ...     fp = sum([p == "yes" and e == "no" for p, e in zip(predictions, expected)])
        ...     return {"score": tp / (tp + fp)}

        Load the experiment and run the evaluation.

        >>> from langsmith import aevaluate, aevaluate_existing
        >>> import asyncio
        >>> dataset_name = "Evaluate Examples"
        >>> async def apredict(inputs: dict) -> dict:
        ...     # This can be any async function or just an API call to your app.
        ...     await asyncio.sleep(0.1)
        ...     return {"output": "Yes"}
        >>> # First run inference on the dataset
        ... results = asyncio.run(
        ...     aevaluate(
        ...         apredict,
        ...         data=dataset_name,
        ...     )
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...

        Then evaluate the results:
        >>> experiment_name = "My Experiment:64e6e91"  # Or manually specify
        >>> results = asyncio.run(
        ...     aevaluate_existing(
        ...         experiment_name,
        ...         evaluators=[accuracy],
        ...         summary_evaluators=[precision],
        ...     )
        ... )  # doctest: +ELLIPSIS
        View the evaluation results for experiment:...
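
        (Editor's sketch, not part of the original docstring: the experiment
        can also be referenced by its UUID; the ID below is illustrative.)

        >>> results = asyncio.run(
        ...     aevaluate_existing(
        ...         "00000000-0000-0000-0000-000000000000",  # experiment UUID
        ...         evaluators=[accuracy],
        ...     )
        ... )  # doctest: +SKIP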


    """
    [Body omitted (bytecode): resolves the client, loads the experiment
    project and its runs (optionally nested) in a worker thread via
    _load_experiment and _load_traces, maps each run back to its reference
    example with _load_examples_map, and delegates to _aevaluate with the
    loaded runs as the target.]


[Signature reconstructed from the annotation strings.]
async def _aevaluate(
    target: Union[ATARGET_T, AsyncIterable[dict], Iterable[schemas.Run], Runnable],
    /,
    data: Union[DATA_T, AsyncIterable[schemas.Example]],
    evaluators: Optional[Sequence[Union[EVALUATOR_T, AEVALUATOR_T]]] = None,
    summary_evaluators: Optional[Sequence[SUMMARY_EVALUATOR_T]] = None,
    metadata: Optional[dict] = None,
    experiment_prefix: Optional[str] = None,
    description: Optional[str] = None,
    max_concurrency: Optional[int] = 0,
    num_repetitions: int = 1,
    client: Optional[langsmith.Client] = None,
    blocking: bool = True,
    experiment: Optional[Union[schemas.TracerSession, str, uuid.UUID]] = None,
    upload_results: bool = True,
) -> AsyncExperimentResults:
    [Body omitted (bytecode): detects whether the target is async
    (asyncio.iscoroutinefunction, an object whose __aiter__ is a coroutine,
    or a langchain Runnable), resolves the experiment and client, builds an
    _AsyncExperimentManager, and, optionally inside a LANGSMITH_TEST_CACHE
    VCR-style cache context (ls_utils.with_optional_cache over a per-dataset
    .yaml file), awaits astart() and then chains
    awith_predictions_and_evaluators / awith_predictions / awith_evaluators /
    awith_summary_evaluators as requested, wraps the manager in
    AsyncExperimentResults, and awaits wait() when blocking=True.]


class _AsyncExperimentManager(_ExperimentManagerMixin):
    """Manage the execution of experiments asynchronously.

    Supports lazily running predictions and evaluations in parallel to facilitate
    result streaming and early debugging.

    Args:
        data (DATA_T): The data used for the experiment. Can be a dataset name or ID OR
            a generator of examples.
        runs (Optional[Iterable[schemas.Run]]): The runs associated with the experiment
            predictions.
        experiment (Optional[schemas.TracerSession]): The tracer session
            associated with the experiment.
        experiment_prefix (Optional[str]): The prefix for the experiment name.
        description (Optional[str]): The description for the experiment.
        metadata (Optional[dict]): Additional metadata for the experiment.
        client (Optional[langsmith.Client]): The Langsmith client used for
            the experiment.
        evaluation_results (Optional[Iterable[EvaluationResults]]): The evaluation
            results for the experiment.
        summary_results (Optional[Iterable[EvaluationResults]]): The aggregate results
            for the experiment.
        num_repetitions (Optional[int], default=1): The number of repetitions for
            the experiment.
        include_attachments (Optional[bool], default=False): Whether to include
            attachments. This is used when we pull the examples for the experiment.
        reuse_attachments (Optional[bool], default=False): Whether to reuse attachments
            from examples. This is True if we need to reuse attachments across multiple
            target/evaluator functions.
        upload_results (Optional[bool], default=True): Whether to upload results
            to Langsmith.
        attachment_raw_data_dict (Optional[dict]): A dictionary to store raw data
            for attachments. Only used if we reuse attachments across multiple
            target/evaluator functions.
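
    (Editor's sketch, not part of the original docstring: the typical flow,
    reconstructed from _aevaluate above; these are private APIs and the exact
    keyword arguments are assumptions.)

    .. code-block:: python

        manager = await _AsyncExperimentManager(data, client=client).astart()
        manager = await manager.awith_predictions(target)
        manager = await manager.awith_evaluators(evaluators)
        manager = await manager.awith_summary_evaluators(summary_evaluators)
        results = AsyncExperimentResults(manager)
        await results.wait()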
    """

    [__init__ signature reconstructed from the annotation strings.]
    def __init__(
        self,
        data: Union[DATA_T, AsyncIterable[schemas.Example]],
        /,
        experiment: Optional[Union[schemas.TracerSession, str]] = None,
        metadata: Optional[dict] = None,
        runs: Optional[Union[Iterable[schemas.Run], AsyncIterable[schemas.Run]]] = None,
        client: Optional[langsmith.Client] = None,
        evaluation_results: Optional[AsyncIterable[EvaluationResults]] = None,
        summary_results: Optional[AsyncIterable[EvaluationResults]] = None,
        description: Optional[str] = None,
        num_repetitions: int = 1,
        include_attachments: bool = False,
        reuse_attachments: bool = False,
        upload_results: bool = True,
        attachment_raw_data_dict: Optional[dict] = None,
    ):
        [Body omitted (bytecode): forwards experiment, metadata, client, and
        description to super().__init__, then stores the rest on private
        attributes (_data, _examples, _runs, _evaluation_results,
        _summary_results, _num_repetitions, _include_attachments,
        _reuse_attachments, _upload_results, _attachment_raw_data_dict).]

    def _reset_example_attachments(self, example: schemas.Example) -> schemas.Example:
        """Reset attachment readers for an example.

        This is only needed when an attachment is going to be used by more
        than one callable (target + evaluators). In that case we keep a
        single copy of the attachment data in self._attachment_raw_data_dict
        and create readers from that data, so we don't have to keep copies of
        the same data in memory; we can just create new readers over the
        shared bytes.
        """
        [Body omitted (bytecode): rebuilds each attachment with a fresh
        io.BytesIO reader over the shared raw data, then reconstructs the
        schemas.Example with the new attachments.]

    [The remaining method bodies are bytecode; their recoverable names,
    docstrings, and error strings follow.]

    async def aget_examples(self) -> AsyncIterator[schemas.Example]: ...
        [Resolves the data via _aresolve_data, optionally tees the stream to
        cache raw attachment bytes for reuse, and chains num_repetitions
        copies of the example stream.]

    async def get_dataset_id(self) -> str: ...
        [Raises ValueError("No examples found in the dataset.") when neither
        the experiment's reference_dataset_id nor any example is available.]

    async def aget_runs(self) -> AsyncIterator[schemas.Run]: ...
        [Raises ValueError("Runs not loaded yet.") when runs are missing.]

    async def aget_evaluation_results(self) -> AsyncIterator[EvaluationResults]: ...

    async def astart(self) -> _AsyncExperimentManager: ...
        [Raises ValueError("No examples found in the dataset. Please ensure
        the data provided to aevaluate is not empty."); otherwise creates the
        project, prints the experiment-start banner, and returns a new
        manager carrying the examples, runs, and project.]

    def _get_example_with_readers(self, example: schemas.Example) -> schemas.Example: ...

    async def awith_predictions_and_evaluators(
        self,
        target: ATARGET_T,
        evaluators: Sequence[Union[EVALUATOR_T, AEVALUATOR_T]],
        /,
        max_concurrency: Optional[int] = None,
    ) -> _AsyncExperimentManager:
        """Run predictions and evaluations in a single pipeline.

        This allows evaluators to process results as soon as they're available
        from the target function, rather than waiting for all predictions to
        complete first.
        """
        [Body omitted (bytecode). The inner per-example coroutine's docstring
        reads: "Create a single task per example. That task is to run the
        target function and all the evaluators sequentially." The tasks are
        drained with aitertools.aiter_with_concurrency.]

    async def awith_predictions(self, target: ATARGET_T, /, max_concurrency: Optional[int] = None) -> _AsyncExperimentManager: ...

    async def awith_evaluators(self, evaluators: Sequence[Union[EVALUATOR_T, AEVALUATOR_T]], *, max_concurrency: Optional[int] = None) -> _AsyncExperimentManager: ...

    async def awith_summary_evaluators(self, summary_evaluators: Sequence[SUMMARY_EVALUATOR_T]) -> _AsyncExperimentManager: ...

    async def aget_results(self) -> AsyncIterator[ExperimentResultRow]: ...

    async def aget_summary_scores(self) -> Dict[str, List[dict]]: ...

    [Private pipeline helpers _apredict, _ascore, _arun_evaluators, and
    _aapply_summary_evaluators carry the error strings "Error parsing
    feedback keys: ...", "Error running evaluator ... on run ...", and
    "Error running summary evaluator ...". _get_dataset_version returns the
    latest example modified_at as an ISO timestamp; _get_dataset_splits
    collects each example's metadata["dataset_split"] values, defaulting to
    "base"; _aend writes dataset_version and dataset_splits onto the
    project's metadata, raising ValueError("Experiment not started yet.") if
    no project exists.]


class AsyncExperimentResults:
    [Recoverable shape: wraps an _AsyncExperimentManager and starts a
    background _process_data task in __init__; exposes an experiment_name
    property; __aiter__/__anext__ stream ExperimentResultRow items as they
    are processed, polling via an internal _wait_until_index helper;
    to_pandas(start=0, end=None) and _repr_html_ delegate to _to_pandas
    (pandas optional); __len__ and __repr__
    ("<AsyncExperimentResults {experiment_name}>"); wait() awaits the
    background task.]


[Module-level helpers; signatures reconstructed from the annotation strings.]

async def _aforward(
    fn: rh.SupportsLangsmithExtra[[dict], Awaitable],
    example: schemas.Example,
    experiment_name: str,
    metadata: dict,
    client: langsmith.Client,
    include_attachments: bool = False,
) -> _ForwardResults:
    [Runs the traceable target on one example with example_version, metadata,
    and experiment name attached; logs "Error running target function: ..."
    on failure and returns the run/example pair.]

def _ensure_async_traceable(target: ATARGET_T) -> rh.SupportsLangsmithExtra[[dict], Awaitable]:
    [Validates the target and, if needed, wraps it with
    rh.traceable(name="AsyncTarget"). Recoverable error strings:

        "Target must be an async function. For sync functions, use evaluate.
        Example usage:

        async def predict(inputs: dict) -> dict:
            # do work, like chain.invoke(inputs)
            return {...}
        await aevaluate(predict, ...)"

    and "Target must be a callable async function. Received a non-callable
    object. ..." followed by the same snippet.]

def _aresolve_data(data, *, client: langsmith.Client, include_attachments: bool = False) -> AsyncIterator[schemas.Example]:
    """Return the examples for the given dataset."""

async def async_chain_from_iterable(iterable: Iterable[AsyncIterable[T]]) -> AsyncIterator[T]:
    """Chain multiple async iterables."""

async def async_iter_from_list(examples: List[schemas.Example]) -> AsyncIterable[schemas.Example]:
    """Convert a list of examples to an async iterable."""

[The remainder of the .pyc, name tables, line-number tables, and the import
footer, contains no further recoverable content.]
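
[Editor's sketch, not part of the recovered module: consuming the results
object returned by aevaluate, based on the AsyncExperimentResults surface
reconstructed above (async iteration, to_pandas, wait). It assumes the
apredict and accuracy helpers from the docstring examples, and that
ExperimentResultRow is a TypedDict with run / example / evaluation_results
keys.]

import asyncio
from langsmith import aevaluate

async def main() -> None:
    results = await aevaluate(apredict, data="Evaluate Examples", evaluators=[accuracy])
    async for row in results:  # rows stream in as predictions and evaluations finish
        for res in row["evaluation_results"]["results"]:
            print(row["example"].id, res.key, res.score)
    await results.wait()        # ensure background processing has fully drained
    print(results.to_pandas())  # requires pandas to be installed

asyncio.run(main())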