LangChain Study Notes (8)

Published: 2024-03-02

RunnableLambda: Run Custom Functions | 🦜️🔗 Langchain

You can use arbitrary functions in a pipeline, but note that every function may accept only one argument. When a function needs multiple arguments, wrap them in a single dict and unpack it inside the function.

For itemgetter usage, see LangChain Study Notes (6)_runnablepassthrough-CSDN博客
from operator import itemgetter

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_community.chat_models import ChatOllama

def single_arg(arg):
    # A single-argument function can be wrapped directly.
    return arg

def _multiple_arg(arg1, arg2):
    return arg1 * arg2

def multiple_arg(_dict):
    # A function with multiple arguments must be wrapped in a dict;
    # the unpacking logic lives inside the wrapper function.
    return _multiple_arg(_dict["arg1"], _dict["arg2"])

prompt = ChatPromptTemplate.from_template("what is {a} + {b}")
model = ChatOllama(model="qwen:0.5b-chat", temperature=0)

chain = (
    {
        "a": itemgetter("arg1") | RunnableLambda(single_arg),
        "b": {"arg1": itemgetter("arg1"), "arg2": itemgetter("arg2")}
        | RunnableLambda(multiple_arg),
    }
    | prompt
    | model
)
print(chain.invoke({"arg1": 1, "arg2": 2}))
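To see what the dict at the head of the chain computes before the result reaches the prompt, here is a plain-Python sketch that needs no LLM. The helper `run_mapping` is illustrative, not a LangChain API:

```python
from operator import itemgetter

def single_arg(arg):
    return arg

def _multiple_arg(arg1, arg2):
    return arg1 * arg2

def multiple_arg(_dict):
    # Multiple arguments arrive packed in one dict.
    return _multiple_arg(_dict["arg1"], _dict["arg2"])

# Hypothetical helper: replicates the mapping step of the chain above.
def run_mapping(inputs: dict) -> dict:
    return {
        "a": single_arg(itemgetter("arg1")(inputs)),
        "b": multiple_arg({"arg1": itemgetter("arg1")(inputs),
                           "arg2": itemgetter("arg2")(inputs)}),
    }

print(run_mapping({"arg1": 1, "arg2": 2}))  # {'a': 1, 'b': 2}
```

The `{"a": ..., "b": ...}` produced here is exactly the dict of template variables that `prompt` consumes.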


RunnableConfig
Used to pass callbacks, tags, and other configuration information to nested runs.

import json

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, RunnableLambda
from langchain_openai import ChatOpenAI

def parse_or_fix(text: str, config: RunnableConfig):
    fixing_chain = (
        ChatPromptTemplate.from_template(
            "Fix the following text:\n\n```text\n{input}\n```\nError: {error}"
            " Don't narrate, just respond with the fixed data."
        )
        | ChatOpenAI()
        | StrOutputParser()
    )
    for _ in range(3):  # retry up to three times
        try:
            return json.loads(text)
        except Exception as e:
            # Pass `config` through so callbacks and tags reach the nested run.
            text = fixing_chain.invoke({"input": text, "error": e}, config)
    return "Failed to parse"

from langchain.callbacks import get_openai_callback
with get_openai_callback() as cb:
    output = RunnableLambda(parse_or_fix).invoke(
        "{foo: bar}", {"tags": ["my-tag"], "callbacks": [cb]}
    )
    print(output)
    print(cb)
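The retry logic in `parse_or_fix` can be isolated without any LLM. In this sketch the `fix` callable stands in for `fixing_chain.invoke`, and `toy_fix` is a hypothetical fixer that just quotes bare words:

```python
import json
import re

def parse_or_fix_sketch(text: str, fix):
    # Same three-attempt loop as parse_or_fix above; `fix` replaces the chain.
    for _ in range(3):
        try:
            return json.loads(text)
        except Exception as e:
            text = fix(text, e)
    return "Failed to parse"

def toy_fix(text, error):
    # Illustrative only: wrap bare identifiers in double quotes.
    return re.sub(r"\b([a-zA-Z_]\w*)\b", r'"\1"', text)

print(parse_or_fix_sketch('{foo: bar}', toy_fix))  # {'foo': 'bar'}
```

The first attempt fails because `{foo: bar}` is not valid JSON; the fixer rewrites it to `{"foo": "bar"}` and the second attempt succeeds. With a fixer that cannot repair the input, the loop exhausts its three attempts and returns "Failed to parse".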
