
Desearch

AI-Powered Search Engine and API for Advanced Data Discovery.

Welcome to the guide on integrating Desearch with LangChain for Retrieval-Augmented Generation (RAG) and agent-style workflows. Desearch is a privacy-first search API that delivers relevant, real-time web and Twitter content, optimized for use in Large Language Model (LLM) applications.

First, get a Desearch API key and add it as an environment variable. You get $5 of free credit (plus more for completing certain actions, such as making your first search) when you sign up here.

.env

DESEARCH_API_KEY=your_api_key_here
# Load the API key from .env into the environment
from dotenv import load_dotenv

load_dotenv()
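
If you prefer not to use a .env file, you can set the variable directly in your Python session. A minimal sketch using only the standard library:

import getpass
import os

# Prompt for the key only if it is not already set in the environment
if not os.environ.get("DESEARCH_API_KEY"):
    os.environ["DESEARCH_API_KEY"] = getpass.getpass("Enter your Desearch API key: ")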

And install the integration package:

pip install -U langchain-desearch

🛠️ Tools Overview

Desearch provides a suite of modular tools compatible with LangChain, each designed for specific use cases:

🔎 Search Tools

  • DesearchTool: A comprehensive tool for searching across web, AI, and Twitter posts.
  • BasicWebSearchTool: A lightweight tool focused solely on web searches (see the sketch after this list).
  • BasicTwitterSearchTool: A specialized tool for searching tweets with advanced filtering options.
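
The examples below cover DesearchTool and BasicTwitterSearchTool. As a quick sketch of the web-only tool, assuming it accepts a query argument analogous to the Twitter tool (the argument name is an assumption, not confirmed here), usage would look like:

from langchain_desearch.tools import BasicWebSearchTool

# Illustrative sketch: the `query` argument name is assumed by analogy with
# BasicTwitterSearchTool and may differ in the actual tool schema
web_tool = BasicWebSearchTool()
response = web_tool._run(query="open-source LLM benchmarks")
print(response)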

🧪 Quick Examples

Basic Usage

Here's a simple example demonstrating how to use the DesearchTool for web searches:

from langchain_desearch.tools import DesearchTool

# Initialize the tool
tool = DesearchTool()

# Run a search query
response = tool._run(
    prompt="Bittensor network activity",
    tool=["web"],
    model="NOVA",
    date_filter="PAST_24_HOURS",
)

# Print the response
print(response)
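
Because DesearchTool is a standard LangChain tool, it can also be called through the public invoke interface rather than _run. A sketch, assuming the same argument names as in the example above:

# Equivalent call through the standard LangChain tool interface
response = tool.invoke({
    "prompt": "Bittensor network activity",
    "tool": ["web"],
    "model": "NOVA",
    "date_filter": "PAST_24_HOURS",
})
print(response)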

Search Twitter

To search Twitter for specific topics, use the BasicTwitterSearchTool:

from langchain_desearch.tools import BasicTwitterSearchTool

# Initialize the Twitter search tool
tool = BasicTwitterSearchTool()

# Execute a search query
response = tool._run(query="AI governance", sort="Top", count=5)

# Display the results
print(response)
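
Like any LangChain tool, the search tools can also be exposed to a tool-calling chat model, which then decides when to invoke them. A brief sketch with ChatDeepSeek, assuming a DEEPSEEK_API_KEY is set and the tool's argument schema is tool-calling compatible:

from langchain_desearch.tools import BasicTwitterSearchTool
from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(model="deepseek-chat", temperature=0)
llm_with_tools = llm.bind_tools([BasicTwitterSearchTool()])

# The model returns the tool calls it wants to make; executing them is up to you
ai_msg = llm_with_tools.invoke("Find recent tweets about AI governance.")
print(ai_msg.tool_calls)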

🤖 Building a RAG Chain

Creating a RAG chain involves combining Desearch with LangChain's prompt and LLM components:

from langchain_desearch.tools import DesearchTool
from langchain_core.prompts import ChatPromptTemplate
from langchain_deepseek import ChatDeepSeek
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Initialize the Desearch tool
tool = DesearchTool()

# Define a function to fetch context
def fetch_context(query):
    return tool._run(prompt=query, tool="desearch_web", model="NOVA")

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a research assistant."),
    ("human", "Answer this question: {query}\n\nUse the provided context:\n\n<context>{context}</context>"),
])

# Set up the LLM
llm = ChatDeepSeek(model="deepseek-chat", temperature=0.7)
parser = StrOutputParser()

# Build the RAG chain
chain = RunnableParallel({
    "query": RunnablePassthrough(),
    "context": lambda q: fetch_context(q),
}) | prompt | llm | parser

# Invoke the chain with a query
result = chain.invoke("What is the latest update on the EU AI Act?")
print(result)
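
Since the chain is a standard Runnable, you can also stream the answer incrementally instead of waiting for the full response:

# Stream the answer token by token (the output parser yields string chunks)
for chunk in chain.stream("What is the latest update on the EU AI Act?"):
    print(chunk, end="", flush=True)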

🧠 Using Desearch in a LangChain Agent

Integrate Desearch into a LangChain agent to automate complex workflows:

from langchain_desearch.agent import create_search_agent
from langchain_deepseek import ChatDeepSeek

# Initialize the LLM
llm = ChatDeepSeek(model="deepseek-chat", temperature=0.7)

# Create a search agent
agent = create_search_agent(llm=llm)

# Invoke the agent with an input message
response = agent.invoke({"input_message": "Summarize current AI regulation trends"})
print(response["output"])

API Reference: ChatDeepSeek
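
If you prefer to assemble the agent yourself rather than using the prebuilt helper, the same tools can be handed to LangGraph's prebuilt ReAct agent. A sketch, assuming langgraph is installed:

from langgraph.prebuilt import create_react_agent
from langchain_desearch.tools import DesearchTool, BasicTwitterSearchTool
from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(model="deepseek-chat", temperature=0.7)
agent = create_react_agent(llm, [DesearchTool(), BasicTwitterSearchTool()])

# The agent loops between the LLM and the Desearch tools until it has an answer
result = agent.invoke({"messages": [("user", "Summarize current AI regulation trends")]})
print(result["messages"][-1].content)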

✅ Testing

Unit Tests (Mocked)

To ensure your integration works as expected, run the test suites:

pytest tests/integration_tests/test_tools.py
pytest tests/unit_tests/test_imports.py
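
The unit tests are mocked, so they run without a Desearch API key. As an illustrative sketch of the pattern (the helper function and assertions below are hypothetical, not taken from the package's test suite):

from unittest.mock import MagicMock

def summarize_with_search(tool, topic: str) -> str:
    """Toy helper that wraps a Desearch tool call (illustrative only)."""
    return f"Results for {topic}: {tool._run(prompt=topic)}"

def test_summarize_with_search_uses_tool():
    # Replace the real tool with a mock so no API key or network access is needed
    fake_tool = MagicMock()
    fake_tool._run.return_value = "mocked search results"

    assert summarize_with_search(fake_tool, "AI") == "Results for AI: mocked search results"
    fake_tool._run.assert_called_once_with(prompt="AI")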
