xrag.eval package#
Submodules#
xrag.eval.DeepEvalLocalModel module#
- class xrag.eval.DeepEvalLocalModel.DeepEvalLocalModel(model, tokenizer)[source]#
Bases: DeepEvalBaseLLM
- async a_generate(prompt)[source]#
Asynchronously runs the wrapped model and returns the LLM's response to the given prompt.
- Return type:
str
- Returns:
The model's generated response text.
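The wrapper pattern above can be sketched as follows. This is a minimal, self-contained analogue of DeepEvalLocalModel, not the xrag implementation: StubModel, StubTokenizer, and LocalModelWrapper are hypothetical stand-ins for a real (model, tokenizer) pair such as one loaded from Hugging Face transformers.

```python
import asyncio

# Hypothetical stand-ins for a real model/tokenizer pair.
class StubTokenizer:
    def __call__(self, text):
        return text.split()

class StubModel:
    def generate(self, tokens):
        return " ".join(tokens).upper()

class LocalModelWrapper:
    """Minimal analogue of DeepEvalLocalModel: wraps a (model, tokenizer)
    pair behind an async a_generate(prompt) -> str interface."""

    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer

    async def a_generate(self, prompt: str) -> str:
        # Tokenize the prompt, run the underlying model, return the text.
        tokens = self.tokenizer(prompt)
        return self.model.generate(tokens)

wrapper = LocalModelWrapper(StubModel(), StubTokenizer())
print(asyncio.run(wrapper.a_generate("hello world")))  # HELLO WORLD
```

An async interface like this lets an evaluation loop fire many model calls concurrently, which matters when each metric issues its own LLM request.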
xrag.eval.EvalModelAgent module#
xrag.eval.evaluate_LLM module#
- xrag.eval.evaluate_LLM.UptrainEvaluate(evalModelAgent, question, actual_response, retrieval_context, expected_answer, gold_context, checks, local_model='qwen:7b-chat-v1.5-q8_0')[source]#
- xrag.eval.evaluate_LLM.evaluating(question, response, actual_response, retrieval_context, retrieval_ids, expected_answer, golden_context, golden_context_ids, metrics, evalModelAgent)[source]#
- xrag.eval.evaluate_LLM.get_DeepEval_Metrices(evalModelAgent, model_name='DeepEval_retrieval_contextualPrecision')[source]#
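The default `model_name='DeepEval_retrieval_contextualPrecision'` suggests metrics are selected by string name. The sketch below shows such a name-to-metric lookup; the registry and `get_metric` helper are hypothetical illustrations, not the actual body of get_DeepEval_Metrices.

```python
# Hypothetical registry mapping metric-name strings of the form used above
# to metric constructors; the real function presumably resolves names to
# DeepEval metric objects.
METRIC_REGISTRY = {
    "DeepEval_retrieval_contextualPrecision": lambda: "ContextualPrecisionMetric",
    "DeepEval_retrieval_contextualRecall": lambda: "ContextualRecallMetric",
}

def get_metric(model_name="DeepEval_retrieval_contextualPrecision"):
    """Resolve a metric-name string to a metric instance (illustrative)."""
    try:
        return METRIC_REGISTRY[model_name]()
    except KeyError:
        raise ValueError(f"unknown metric name: {model_name}")

print(get_metric())  # ContextualPrecisionMetric
```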
xrag.eval.evaluate_TGT module#
xrag.eval.evaluate_TRT module#
xrag.eval.evaluate_rag module#
- xrag.eval.evaluate_rag.NLGEvaluate(questions, actual_responses, expect_answers, golden_context_ids, metrics)[source]#
- xrag.eval.evaluate_rag.UptrainEvaluate(evalModelAgent, question, actual_response, retrieval_context, expected_answer, gold_context, checks, local_model='qwen:7b-chat-v1.5-q8_0')[source]#
- xrag.eval.evaluate_rag.evaluating(question, response, actual_response, retrieval_context, retrieval_ids, expected_answer, golden_context, golden_context_ids, metrics, evalModelAgent)[source]#
- xrag.eval.evaluate_rag.get_DeepEval_Metrices(evalModelAgent, model_name='DeepEval_retrieval_contextualPrecision')[source]#
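NLGEvaluate takes batched, parallel lists (questions, actual_responses, expect_answers, ...). As an illustration of that calling shape, here is a self-contained token-overlap F1 scorer averaged over a batch; it is a generic NLG metric used as a stand-in, not the metric set xrag actually computes.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between one prediction and one reference
    (illustrative stand-in for an NLG metric)."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def batch_f1(actual_responses, expect_answers):
    # Mirror the batched calling shape of NLGEvaluate: score each
    # (response, answer) pair, then average over the batch.
    scores = [token_f1(a, e) for a, e in zip(actual_responses, expect_answers)]
    return sum(scores) / len(scores)

print(batch_f1(["paris is the capital"], ["paris"]))  # 0.4
```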