

update0728 #110

Open
wants to merge 1 commit into base: main
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
China won 103 gold medals in the 2023 Chengdu Universiade.
108 changes: 49 additions & 59 deletions langchain/jupyter/autogpt/autogpt.ipynb

Large diffs are not rendered by default.

47 changes: 32 additions & 15 deletions langchain/langsmith/evaluation.ipynb
@@ -19,10 +19,27 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "4913e104-82e6-4932-8e80-2b8bd57553c3",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Name: langsmith\n",
"Version: 0.1.85\n",
"Summary: Client library to connect to the LangSmith LLM Tracing and Evaluation Platform.\n",
"Home-page: https://smith.langchain.com/\n",
"Author: LangChain\n",
"Author-email: [email protected]\n",
"License: MIT\n",
"Location: c:\\users\\lenovo\\appdata\\roaming\\python\\python310\\site-packages\n",
"Requires: orjson, pydantic, requests\n",
"Required-by: langchain, langchain-community, langchain-core\n"
]
}
],
"source": [
"!pip show langsmith"
]
@@ -39,7 +56,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "4cb8b089-8d3c-4f56-b5d3-2929dcb49c26",
"metadata": {},
"outputs": [],
@@ -90,7 +107,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "7b54f22e-17ae-41f2-a137-76e84fef9b49",
"metadata": {},
"outputs": [],
@@ -135,7 +152,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "0559ea2a-082d-4836-92cd-7473711ee79a",
"metadata": {},
"outputs": [],
@@ -167,24 +184,24 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "eeec0c29-5e85-46e1-915b-619b68627d63",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/ubuntu/miniconda3/envs/langchain/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
"C:\\ProgramData\\anaconda3\\envs\\langchain\\lib\\site-packages\\tqdm\\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"View the evaluation results for experiment: 'Toxic Queries-465b0ea2' at:\n",
"https://smith.langchain.com/o/3d35c1a5-b729-4d18-b06d-db0f06a30bc1/datasets/e1df55ff-b66c-4bcf-b5fd-7c63a847136e/compare?selectedSessions=2900c5b7-9dd5-482a-ab79-32888be3d5b9\n",
"View the evaluation results for experiment: 'Toxic Queries-dca23089' at:\n",
"https://smith.langchain.com/o/e310e593-5efd-5b23-825d-2441d452dd41/datasets/4ffa139a-d67c-4bdd-a27c-82700361810b/compare?selectedSessions=fc0f1969-a7af-4c53-9e80-f5aebe2ebb30\n",
"\n",
"\n"
]
@@ -193,7 +210,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"6it [00:01, 4.71it/s]\n"
"6it [00:05, 1.12it/s]\n"
]
}
],
@@ -242,7 +259,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "46817304-1e17-4ca1-a5ba-faebd80c3728",
"metadata": {},
"outputs": [],
@@ -275,7 +292,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "096e3129-8e5e-42b9-8c42-d59f072f20c5",
"metadata": {},
"outputs": [],
@@ -332,17 +349,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "431bbdb3-d4a3-445a-9cfc-2e62adff3ad0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"To build a Retrieval-Augmented Generation (RAG) chain in LCEL, you would need to compose a chain that includes a retriever component to fetch relevant documents or data based on a query, and then pass that retrieved data to a generator model to produce a final output. In LCEL, this would typically involve using `Retriever` and `Generator` components, which you can easily piece together thanks to LCEL's composable nature.\\n\\nThe following example is a simplified step-by-step guide to building a bas\""
"\"To build a Retrieve-and-Generate (RAG) chain in LangChain Expression Language (LCEL), you would typically structure your chain to include a retrieval step from some data source or knowledge base, followed by a generation step using a language model to synthesize responses based on the retrieved information. Although the documentation excerpts provided do not explicitly detail a RAG example, here's a conceptual outline on how you might construct a RAG chain in LCEL based on the principles of chai\""
]
},
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
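The evaluation notebook above follows LangSmith's `evaluate(target, data, evaluators)` pattern, which produced the "Toxic Queries" experiment URLs in the output cells. As a rough, self-contained sketch of that flow — using a local stand-in instead of the real `langsmith.evaluate`, which uploads results to the LangSmith UI; the names `run_eval`, `toxicity_evaluator`, and the toy data are illustrative, not LangSmith APIs:

```python
def toxicity_evaluator(inputs, outputs):
    # Toy evaluator: flag outputs containing a blocked word.
    blocked = {"hate", "stupid"}
    toxic = any(w in outputs["answer"].lower() for w in blocked)
    return {"key": "non_toxic", "score": 0 if toxic else 1}

def run_eval(target, data, evaluators):
    # Stand-in for langsmith.evaluate: run the target over each example,
    # score it with every evaluator, and collect results locally.
    results = []
    for example in data:
        outputs = target(example["inputs"])
        scores = [ev(example["inputs"], outputs) for ev in evaluators]
        results.append({"inputs": example["inputs"],
                        "outputs": outputs,
                        "scores": scores})
    return results

# Usage: a trivial "model" that echoes the query back as its answer.
data = [{"inputs": {"query": "You are stupid"}},
        {"inputs": {"query": "Hello"}}]
target = lambda inputs: {"answer": inputs["query"]}
results = run_eval(target, data, evaluators=[toxicity_evaluator])
```

The real `evaluate` additionally records each run against a named dataset so the comparison view linked in the cell output can render per-example scores.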
38 changes: 19 additions & 19 deletions langchain/langsmith/tracing.ipynb
@@ -37,7 +37,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 1,
"id": "5956cb98-b567-4a2f-9ead-1b14742e8bc7",
"metadata": {},
"outputs": [],
@@ -81,14 +81,14 @@
"@traceable\n",
"def run_pipeline(prompt):\n",
"    # run the whole pipeline\n",
"    messages = for mat_prompt(prompt)  # build the prompt messages\n",
"    messages = format_prompt(prompt)  # build the prompt messages\n",
"    response = invoke_llm(messages)  # call the model\n",
"    return parse_output(response)  # parse the model output"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 2,
"id": "834711de-9f4a-40e5-90ef-d4fd16c2faff",
"metadata": {},
"outputs": [
@@ -98,7 +98,7 @@
"'为一家卖烤鸭的店取名字时,可以考虑以下一些建议:\\n\\n1. 金陵烤鸭坊\\n2. 鸭香阁\\n3. 鸭乐园\\n4. 鸭舫\\n5. 鸭悦坊\\n6. 鸭乐食府\\n7. 鸭香居\\n8. 鸭乐轩\\n9. 鸭乐园\\n10. 鸭乐坊\\n\\n希望这些建议能够帮助你取一个好听且有吸引力的店名!'"
]
},
"execution_count": 7,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -109,17 +109,17 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 3,
"id": "c2a5528b-9051-4b28-ac04-90579471650f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'为一家卖驴肉火烧的店取名可以考虑以下几个方向:\\n1. 与驴肉相关的名字:比如“驴肉香坊”、“驴肉乡村”等。\\n2. 引人入胃的名字:比如“香味驴肉坊”、“美味驴肉馆”等。\\n3. 独特创意的名字:比如“驴肉烧烤屋”、“驴肉烧的香”等。\\n\\n希望以上建议能够帮助你取一个好听的店名!'"
"'为一家卖驴肉火烧的店取名可以考虑以下几个方向:\\n\\n1. 传统风格:比如“骡香阁”、“驴肉馆”等,突出传统风味。\\n\\n2. 创意取名:比如“驴肉一口香”、“驴肉烧之家”等,突出独特创意。\\n\\n3. 地域特色:比如“驴肉火烧坊”、“驴肉香乡”等,突出地域特色。\\n\\n4. 品牌定位:根据店铺的定位和目标顾客群,取名可以突出品牌形象,比如“驴肉鲜香坊”、“驴肉火烧小馆”等。\\n\\n希望以上建议能够帮助你取一个适合的店名!'"
]
},
"execution_count": 8,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -150,7 +150,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 4,
"id": "b3c43988-c21a-4de7-95a6-5c7d8c877d7e",
"metadata": {},
"outputs": [],
@@ -191,17 +191,17 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 5,
"id": "f68fd0b1-42cf-41d6-80db-34ae698938c3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'当然!在今天早上的会议中,我们主要讨论了出海创业的机会与挑战。'"
"'当然,今天早上的会议主要讨论了出海创业的机会与挑战。如果需要更详细的总结或其他信息,请告诉我!'"
]
},
"execution_count": 24,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -243,7 +243,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 6,
"id": "b20cc85a-b244-46bb-9f60-ef1bc7dd6596",
"metadata": {},
"outputs": [],
@@ -326,7 +326,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 7,
"id": "21255c23-f9e4-4c96-b729-475cd8ae62f5",
"metadata": {},
"outputs": [],
@@ -390,7 +390,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 8,
"id": "99f85113-0fe2-41c9-80e0-905fbf6d1b22",
"metadata": {},
"outputs": [],
@@ -423,15 +423,15 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 9,
"id": "c6661f6a-8d54-4fe4-96f8-8f0d046a719b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"这幅图是一种幽默搞笑的对比图。左侧展示的是一只形如肌肉男的柴犬,被称为“16岁的我”,右侧则是一只普通的柴犬,被称为“工作后的我”。图片通过夸张的肌肉和普通的狗的形态来幽默地表达了人们对比自己年轻时充满活力和成年后工作压力导致身体和精神状态“变形”的感受。左边的大肌肉柴犬下方的文字翻译为“我可以一口气做一百个俯卧撑,一条跑足十公里,浴火重生的女人,人见人爱的大男孩”,而右边的普通柴犬下方的文字翻译为“好累啊 好想赖床 浑身疼痛 我没有病 你心有病 我命由我不由天 独步天下”。这些标签富含讽刺和幽默意味,反映了现代生活中劳累与压力的普遍现象。\n"
"这幅图是一种幽默搞笑的图片,通过夸张的形式对比了两种不同的生活方式的后果。左边的狗被描绘成一位肌肉发达的人型体格,代表了经常健身的生活方式,配文是“16岁的我工作后的我”,意味着这是年轻时候通过努力锻炼得到的结果。右边的狗看起来比较普通甚至有些发福,表示不经常运动、较懒惰的生活状态,与左边形成对比。整个图片表达了生活方式选择对体型和健康的影响的幽默观点。\n"
]
}
],
@@ -467,7 +467,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 10,
"id": "cf7f045a-5a65-4c3d-aa45-e415a38d62ab",
"metadata": {},
"outputs": [],
@@ -496,7 +496,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 11,
"id": "dcb18175-b9ef-40d4-b455-52be5f66028c",
"metadata": {},
"outputs": [
@@ -506,7 +506,7 @@
"'孙悟空一共打过四次白骨精。'"
]
},
"execution_count": 22,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
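The tracing notebook decorates each pipeline step with `@traceable` so LangSmith records a nested run tree. A minimal stand-in sketch of that pattern — the toy `traceable` below only records call names in a local list, whereas the real `langsmith.traceable` reports each call to LangSmith; the function bodies are illustrative, not the notebook's actual model calls:

```python
import functools

CALLS = []  # local stand-in for the trace log sent to LangSmith

def traceable(func):
    # Toy decorator: record the call, then run the wrapped step.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        CALLS.append(func.__name__)
        return func(*args, **kwargs)
    return wrapper

@traceable
def format_prompt(prompt):
    return [{"role": "user", "content": prompt}]

@traceable
def invoke_llm(messages):
    # Toy "model": upper-case the last message instead of calling an API.
    return {"choices": [{"message": {"content": messages[-1]["content"].upper()}}]}

@traceable
def parse_output(response):
    return response["choices"][0]["message"]["content"]

@traceable
def run_pipeline(prompt):
    messages = format_prompt(prompt)   # build the prompt messages
    response = invoke_llm(messages)    # call the model
    return parse_output(response)      # parse the model output

result = run_pipeline("hello")
```

Because `run_pipeline` is itself decorated, the inner steps appear as children of its trace — which is what produces the nested run view in the LangSmith UI.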
@@ -1,5 +1,12 @@
from langchain_openai import ChatOpenAI
from langchain.chains import LLMChain
# old code
# from langchain.chains import LLMChain

# new code
from langchain.globals import set_verbose
from langchain.globals import set_debug

set_verbose(False)

from utils import LOG
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate
@@ -26,16 +33,29 @@ def __init__(self, model_name: str = "gpt-3.5-turbo", verbose: bool = True):
# set temperature to 0 so translation results stay stable
chat = ChatOpenAI(model_name=model_name, temperature=0, verbose=verbose)

self.chain = LLMChain(llm=chat, prompt=chat_prompt_template, verbose=verbose)
# old code
# self.chain = LLMChain(llm=chat, prompt=chat_prompt_template, verbose=verbose)

# new code
set_debug(verbose)
self.chain = chat_prompt_template | chat

def run(self, text: str, source_language: str, target_language: str) -> (str, bool):
result = ""
try:
result = self.chain.run({
# new code
result = self.chain.invoke({
"text": text,
"source_language": source_language,
"target_language": target_language,
})
}).content
# old code
# result = self.chain.run({
# "text": text,
# "source_language": source_language,
# "target_language": target_language,
# })

except Exception as e:
LOG.error(f"An error occurred during translation: {e}")
return result, False
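The last diff migrates from the deprecated `LLMChain(llm=..., prompt=...).run(...)` to the LCEL composition `chat_prompt_template | chat` with `chain.invoke(...).content`. A minimal sketch of why `.content` becomes necessary — `MiniRunnable`, `Message`, and the fake chat model below are stand-ins so the example runs without an API key; they are not LangChain classes:

```python
class MiniRunnable:
    # Toy version of an LCEL runnable: supports | composition and .invoke().
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # prompt | chat composes left-to-right, like LCEL's pipe operator.
        return MiniRunnable(lambda x: other.func(self.func(x)))

    def invoke(self, value):
        return self.func(value)

class Message:
    # Chat models return a message object, not a plain string.
    def __init__(self, content):
        self.content = content

prompt = MiniRunnable(
    lambda d: f"Translate '{d['text']}' from {d['source_language']} to {d['target_language']}."
)
# Fake chat model: wraps the formatted prompt in a Message, mirroring how
# ChatOpenAI returns an AIMessage rather than a str (unlike LLMChain.run).
chat = MiniRunnable(lambda formatted: Message(f"[translated] {formatted}"))

chain = prompt | chat
result = chain.invoke({
    "text": "你好",
    "source_language": "Chinese",
    "target_language": "English",
}).content
```

This mirrors the diff exactly: `run()` used to return a string, while `invoke()` on a prompt-to-chat-model chain returns a message, so the caller must read `.content`.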