Developing AI Agents with Python (Part Four)

The previous article introduced how to define prompt templates within the Langchain framework.

This article will introduce a tool in the LangChain ecosystem called LangServe.

1. What is LangServe?

LangServe is a tool in the LangChain ecosystem designed to help developers quickly deploy chains, agents, or other runnable objects developed with LangChain as REST API services. It is built on FastAPI, simplifying the process of converting AI applications into standardized interfaces for easier integration with other systems.

Core Features and Characteristics

  1. Rapid Deployment: Convert LangChain objects (such as chains, retrievers, agents) into API endpoints with minimal code, without manually writing routing or serialization logic.

  2. Automatic API Documentation: Integrates FastAPI's Swagger UI to automatically generate interactive API documentation, allowing developers to test interfaces directly.

  3. Asynchronous Support: Natively supports asynchronous request handling, making it suitable for high-concurrency scenarios and improving the efficiency of inference tasks.

  4. Integration with LangChain.js: The generated API can be called directly from LangChain.js, facilitating collaborative front-end and back-end development of AI applications.

  5. Simplified Development Process: Developers can focus on business logic (such as chain design) without dealing with low-level details like HTTP request parsing and response formatting.

2. Writing the Code

1. Large Model Invocation

This part of the code was covered in detail in the previous article, so it will not be repeated here.

# Enable tracing via LangSmith (LangChain's observability platform)
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = 'XXXX'
os.environ["LANGCHAIN_PROJECT"] = "Zhipu AI"

# Set the Zhipu AI API key
os.environ["ZHIPUAI_API_KEY"] = "XXXXX"

# Install the integration package first (run in a shell, not in Python):
# pip install langchain_community
# Import the third-party libraries
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Instantiate the large language model
model = ChatZhipuAI(model_name='glm-4-flash')

# Create a parser that extracts the model's reply as a plain string
parser = StrOutputParser()

# Define prompt template
prompt_template = ChatPromptTemplate.from_messages([
    ('system', 'Please translate the following content into {language}'),
    ('user', '{text}'),
])

# Build the chain: prompt -> model -> parser
chain = prompt_template | model | parser

# Call chain
print(chain.invoke({'language':'English','text':'I am very happy today because I had a hamburger for lunch.'}))
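The `|` pipe above comes from LangChain's Runnable interface: each step's output feeds the next step's input. As a rough illustration only (toy classes, not LangChain's actual implementation), the left-to-right composition behaves like this:

```python
# A toy sketch of how `|` composes steps left to right.
# Step, and the lambdas standing in for the LLM and parser, are
# illustrative stand-ins, not LangChain classes.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # `self | other` yields a new Step that runs self, then other
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda d: f"Translate into {d['language']}: {d['text']}")
model = Step(lambda p: p.upper())   # stands in for the LLM call
parser = Step(lambda s: s)          # stands in for StrOutputParser

chain = prompt | model | parser
print(chain.invoke({'language': 'English', 'text': 'hello'}))
# → TRANSLATE INTO ENGLISH: HELLO
```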

2. Install the LangServe Third-Party Library

pip install "langserve[all]"

3. Install the FastAPI Third-Party Library

Since the LangServe tool is built on FastAPI, we need to install FastAPI first.

pip install fastapi

4. Call Third-Party Libraries

from langserve import add_routes
from fastapi import FastAPI

5. Create FastAPI Application

app = FastAPI(title='My Langchain Service', version='v1.0', description='A server that uses Langchain to translate any statement.')
  • title: The name of the service being built, customizable.

  • version: The version of the service being built, customizable.

  • description: The functionality of the service being built, customizable.

6. Add Routes

add_routes(
    app,
    chain,
    path="/chain"
)
  • app: The FastAPI application to attach the routes to.

  • chain: The runnable to expose; here, the chain created above.

  • path: The URL prefix under which the chain's endpoints are mounted, customizable.
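Under the given path, `add_routes` mounts several endpoints: `POST /chain/invoke` for a single call, `POST /chain/batch` for multiple inputs, `POST /chain/stream` for streamed output, and an interactive playground at `/chain/playground/`. For `invoke`, LangServe expects the chain's input wrapped under an `"input"` key in the JSON body. A small sketch of that payload, using only the standard library (no request is actually sent):

```python
import json

# Request body for POST /chain/invoke: the chain's input dict goes
# under the "input" key; an optional "config" key can carry run settings.
payload = {"input": {"language": "English", "text": "I am very happy today."}}
body = json.dumps(payload)
print(body)
```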

7. Build the Server

if __name__ == "__main__":
    # Build the server
    import uvicorn
    uvicorn.run(app, host="localhost", port=8000)

8. Complete Code

# Install the third-party library first (run in a shell, not in Python):
# pip install "langserve[all]"

# Enable tracing via LangSmith (LangChain's observability platform)
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = 'xxxxxxxx'
os.environ["LANGCHAIN_PROJECT"] = "Zhipu AI"

# Set the Zhipu AI API key
os.environ["ZHIPUAI_API_KEY"] = "xxxxxxxxx"

# Install the integration package first (run in a shell, not in Python):
# pip install langchain_community
# Import the third-party libraries
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.output_parsers import StrOutputParser
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langserve import add_routes

# Instantiate the large language model
model = ChatZhipuAI(model_name='glm-4-flash')

# Create a parser that extracts the model's reply as a plain string
parser = StrOutputParser()

# Define prompt template
prompt_template = ChatPromptTemplate.from_messages([
    ('system', 'Please translate the following content into {language}'),
    ('user', '{text}'),
])

# Build the chain: prompt -> model -> parser
chain = prompt_template | model | parser

# Call chain
print(chain.invoke({'language':'English','text':'I am very happy today because I had a hamburger for lunch.'}))

# Create FastAPI application
app = FastAPI(title='My Langchain Service', version='v1.0', description='A server that uses Langchain to translate any statement.')

# Add routes
add_routes(
    app,
    chain,
    path="/chain"
)

if __name__ == "__main__":
    # Build the server
    import uvicorn
    uvicorn.run(app, host="localhost", port=8000)

9. Running Results

After running, the terminal console displays uvicorn's startup output, indicating that the service has been built successfully.

10. Testing

There are two methods for testing the service interface.

1. Using third-party tools, such as Apipost.

2. Using code. This article demonstrates the second method.

from langserve import RemoteRunnable

if __name__ == "__main__":
    # Connect to the running LangServe endpoint
    client = RemoteRunnable('http://127.0.0.1:8000/chain/')
    # Invoke the remote chain exactly like a local one
    print(client.invoke({'language': 'English', 'text': 'I am very happy today because I had a hamburger for lunch.'}))

Create a new .py file, run the above code, and if the translated text is printed without errors in the terminal console, the interface test was successful.
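If you prefer not to depend on `langserve` on the client side, the same endpoint can be exercised with the standard library alone. This is a sketch that builds the request object; actually sending it requires the server from this article to be running, so the network call is left commented out:

```python
import json
import urllib.request

# LangServe exposes POST <path>/invoke; the chain's input goes under "input".
payload = {"input": {"language": "English",
                     "text": "I am very happy today because I had a hamburger for lunch."}}
req = urllib.request.Request(
    "http://127.0.0.1:8000/chain/invoke",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)

# To actually send the request (server must be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["output"])
```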
