OpenAI supports function calling. With this feature, developers can further extend GPT's capabilities, for example fetching real-time information from the web or interacting with third-party applications.
- OpenAI's function call feature is effectively an interface for custom plugins.
- By hooking in external tools, it greatly mitigates the model's hallucination problem (confidently making things up).
- It also alleviates data-security concerns to some extent: private data can be processed on your own side as much as possible.
Flow and principle
The function call flow works as follows (Python is used as the example here, but it can be any language or any API):
- User -> ChatGPT. You provide ChatGPT with a set of functions, each with a clearly written name, description, and parameters, and then ask ChatGPT a question.
- ChatGPT -> User. ChatGPT decides whether any of your functions needs to be called. If it judges that one of them can solve your problem, it converts your question into a call to that function with the arguments filled in. In the small-model era of dialogue systems, this would have been NLU intent classification plus text2code.
- User -> ChatGPT. You take the function call ChatGPT produced, run the function yourself, and send the result back to ChatGPT.
- ChatGPT -> User. Based on the earlier conversation and the result you provided, ChatGPT finally returns the answer to you.
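The four-step round trip above boils down to a single message list that keeps growing. A minimal sketch, with the model replies written by hand for illustration (in reality steps 2 and 4 come back from the Chat Completions API):

```python
import json

# Step 1: the user question (alongside which you'd send your function schemas)
messages = [{"role": "user", "content": "What's the weather like in Boston?"}]

# Step 2: the model replies with a function call instead of text
messages.append({
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": json.dumps({"location": "Boston, MA"}),
    },
})

# Step 3: you run the function yourself and send the result back
messages.append({
    "role": "function",
    "name": "get_current_weather",
    "content": json.dumps({"location": "Boston, MA", "temperature": "72"}),
})

# Step 4: the model turns the function result into a natural-language answer
messages.append({"role": "assistant", "content": "It's 72°F in Boston."})

print([m["role"] for m in messages])
# ['user', 'assistant', 'function', 'assistant']
```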
At this point you may notice that the whole flow looks a lot like a one-job HuggingGPT, except that HuggingGPT calls models while this calls functions.
If you build your own pipeline of functions, that is essentially LangChain. The difference is that LangChain's structure talks to ChatGPT from inside the functions to complete a task, whereas function call invokes the functions from inside the conversation. In practice, function call feels noticeably smoother; LangChain weeps in the corner.
With a bit more prompt design, letting ChatGPT decide for itself which functions are needed to complete a task, you basically end up with something like AutoGPT.
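A toy version of that decision loop can be sketched without the real API: a stubbed "model" either requests one of several registered tools or answers in text, and the loop runs until it gets text. The tool registry and the stub below are illustrative stand-ins; the real version would pass the schemas to the API with `function_call="auto"` and let the model choose:

```python
import json

# Hypothetical tool registry: name -> callable taking parsed arguments.
TOOLS = {
    "get_time": lambda args: "12:00",
    "add": lambda args: str(args["a"] + args["b"]),
}

def fake_model(messages):
    """Stand-in for ChatCompletion with function_call="auto":
    on a user turn it requests a tool, afterwards it answers in text."""
    if messages[-1]["role"] == "user":
        return {"function_call": {"name": "add",
                                  "arguments": json.dumps({"a": 2, "b": 3})}}
    return {"content": "The sum is " + messages[-1]["content"]}

def agent_loop(question):
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_model(messages)
        call = reply.get("function_call")
        if not call:                        # no tool requested: final answer
            return reply["content"]
        args = json.loads(call["arguments"])
        result = TOOLS[call["name"]](args)  # run the chosen tool locally
        messages.append({"role": "function", "name": call["name"],
                         "content": result})

print(agent_loop("What is 2 + 3?"))  # The sum is 5
```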
Code example
1. First, get an OpenAI-compatible key and base URL.
Project GitHub repo: https://github.com/xing61/xiaoyi-robot
- Step 1: Log in to Zhizengzeng with your phone number and copy out the key and URL: https://gpt.zhizengzeng.com/#/login
- Step 2: Write the code. Note that the base_url to configure is: https://flag.smarttrot.com/v1
2. Now for the Python code (other languages are analogous):

```python
import json
import openai

API_SECRET_KEY = "xxxx"  # the key you obtained from Zhizengzeng
BASE_URL = "https://flag.smarttrot.com/v1"  # Zhizengzeng's base_url

openai.api_key = API_SECRET_KEY
openai.api_base = BASE_URL


# Example dummy function hard coded to return the same weather.
# In production, this could be your backend API or an external API.
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    weather_info = {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        # function_call="auto" (the default) lets the model decide;
        # here we force this particular function to be called:
        function_call={"name": "get_current_weather"},
    )
    response_message = response["choices"][0]["message"]

    # Step 2: check if GPT wanted to call a function
    if response_message.get("function_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # Step 4: send the info on the function call and function response to GPT
        messages.append(response_message)  # extend conversation with assistant's reply
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with function response
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )  # get a new response from GPT where it can see the function response
        return second_response


if __name__ == "__main__":
    print(run_conversation())
```
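As the comment in Step 3 warns, `arguments` is model-generated JSON and can come back truncated, malformed, or missing required fields. A defensive parse (the helper name and required-key default here are illustrative, not part of the OpenAI API) might look like:

```python
import json

def parse_args(raw, required=("location",)):
    """Parse model-generated function arguments, returning None on bad
    JSON or missing required keys instead of crashing the conversation."""
    try:
        args = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return None
    if not isinstance(args, dict):
        return None
    if any(key not in args for key in required):
        return None
    return args

print(parse_args('{"location": "Boston, MA"}'))  # {'location': 'Boston, MA'}
print(parse_args('{"unit": "celsius"'))          # None (truncated JSON)
print(parse_args('{"unit": "celsius"}'))         # None (missing "location")
```

On a None result you can re-ask the model, e.g. by appending a function message reporting the error and requesting the call again.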