{"id":606552,"date":"2025-07-10T14:53:09","date_gmt":"2025-07-10T19:53:09","guid":{"rendered":"https:\/\/towardsdatascience.com\/?p=606552"},"modified":"2025-07-10T14:53:27","modified_gmt":"2025-07-10T19:53:27","slug":"building-a-%d1%81ustom-mcp-chatbot","status":"publish","type":"post","link":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/","title":{"rendered":"Building a \u0421ustom MCP\u00a0Chatbot"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\"><mdspan datatext=\"el1752176987570\" class=\"mdspan-comment\"><strong>MCP (Model Context Protocol)<\/strong> is <\/mdspan>a method to standardise communication between AI applications and external tools or data sources. This standardisation helps to reduce the number of integrations needed (<em>from N*M to N+M<\/em>):&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">You can use community-built MCP servers when you need common functionality, saving time and avoiding the need to reinvent the wheel every time.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">You can also expose your own tools and resources, making them available for others to use.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">In <a href=\"https:\/\/towardsdatascience.com\/your-personal-analytics-toolbox\/\">my previous article<\/a>, we built the analytics toolbox (a collection of tools that might automate your day-to-day routine). We built an MCP server and used its capabilities with existing clients like MCP Inspector or Claude Desktop.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Now, we want to use those tools directly in our AI applications. To do that, let\u2019s build our own MCP client. 
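As a quick illustration of the integration math mentioned at the start (the counts here are made up purely for illustration):

```python
n_apps, m_tools = 4, 6                  # hypothetical numbers of AI apps and external tools
direct_integrations = n_apps * m_tools  # without a shared protocol: N*M bespoke adapters
mcp_integrations = n_apps + m_tools     # with MCP: N clients + M servers

print(direct_integrations, mcp_integrations)  # 24 10
```
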
We will write fairly low-level code, which will also give you a clearer picture of how tools like Claude Code interact with MCP under the hood.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Additionally, I would like to implement the feature that is currently (<em>July 2025<\/em>) missing from Claude Desktop: the ability for the LLM to automatically check whether it has a suitable prompt template for the task at hand and use it. Right now, you have to pick the template manually, which isn\u2019t very convenient.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">As a bonus, I will also share a high-level implementation using the smolagents framework, which is ideal for scenarios when you work only with MCP tools and don\u2019t need much customisation.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">MCP protocol&nbsp;overview<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Here\u2019s a quick recap of the MCP to ensure we\u2019re on the same page. MCP is a protocol developed by Anthropic to standardise the way LLMs interact with the outside world.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">It follows a client-server architecture and consists of three main components:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><strong>Host<\/strong> is the user-facing application.&nbsp;<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>MCP client<\/strong> is a component within the host that establishes a one-to-one connection with the server and communicates using messages defined by the MCP protocol.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>MCP server<\/strong> exposes capabilities such as prompt templates, resources and tools.&nbsp;<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/contributor.insightmediagroup.io\/wp-content\/uploads\/2025\/07\/1emfHVLZDPjEbOpXV7azkNA-1.png\" alt=\"\" class=\"wp-image-608363\"\/><figcaption class=\"wp-element-caption\">Image by 
author<\/figcaption><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Since we\u2019ve already <a href=\"https:\/\/github.com\/miptgirl\/mcp-analyst-toolkit\" rel=\"noreferrer noopener\" target=\"_blank\">implemented the MCP server<\/a> before, this time we will focus on building the MCP client. We will start with a relatively simple implementation and later add the ability to dynamically select prompt templates on the fly.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"wp-block-paragraph\"><em>You can find the full code on <a href=\"https:\/\/github.com\/miptgirl\/miptgirl_medium\/tree\/main\/mcp_client_example\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub<\/a>.<\/em><\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">Building the MCP&nbsp;chatbot<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Let\u2019s begin with the initial setup: we\u2019ll load the Anthropic API key from a config file and adjust Python\u2019s <code>asyncio<\/code> event loop to support nested event loops.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\"># Load configuration and environment\nwith open(&#039;..\/..\/config.json&#039;) as f:\n    config = json.load(f)\nos.environ[&quot;ANTHROPIC_API_KEY&quot;] = config[&#039;ANTHROPIC_API_KEY&#039;]\n\nnest_asyncio.apply()<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Let\u2019s start by building a skeleton of our program to get a clear picture of the application\u2019s high-level architecture.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def main():\n    &quot;&quot;&quot;Main entry point for the MCP ChatBot application.&quot;&quot;&quot;\n    chatbot = MCP_ChatBot()\n    try:\n        await chatbot.connect_to_servers()\n        await chatbot.chat_loop()\n    finally:\n        await chatbot.cleanup()\n\nif __name__ == &quot;__main__&quot;:\n    
asyncio.run(main())<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">We start by creating an instance of the <code>MCP_ChatBot<\/code> class. The chatbot then discovers available MCP capabilities (iterating through all configured MCP servers, establishing connections and requesting their lists of capabilities).&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Once connections are set up, we will initialise an infinite loop where the chatbot listens for user queries, calls tools when needed and continues this cycle until the process is stopped manually.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Finally, we will perform a cleanup step to close all open connections.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Let\u2019s now walk through each stage in more detail.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Initialising the ChatBot&nbsp;class<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Let\u2019s start by creating the class and defining the <code>__init__<\/code> method. The main fields of the ChatBot class are:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><code>exit_stack<\/code> manages the lifecycle of multiple async context managers (the connections to MCP servers), ensuring that all connections will be closed appropriately, even if we face an error during execution. This logic is implemented in the <code>cleanup<\/code> function.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><code>anthropic<\/code> is a client for the Anthropic API, used to send messages to the LLM.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><code>available_tools<\/code> and <code>available_prompts<\/code> are the lists of tools and prompts exposed by all MCP servers we are connected to.&nbsp;<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><code>sessions<\/code> is a mapping of tools, prompts and resources to their respective MCP sessions. 
This allows the chatbot to route requests to the correct MCP server when the LLM selects a specific tool.<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">class MCP_ChatBot:\n  &quot;&quot;&quot;\n  MCP (Model Context Protocol) ChatBot that connects to multiple MCP servers\n  and provides a conversational interface using Anthropic&#039;s Claude.\n    \n  Supports tools, prompts, and resources from connected MCP servers.\n  &quot;&quot;&quot;\n    \n  def __init__(self):\n    self.exit_stack = AsyncExitStack() \n    self.anthropic = Anthropic() # Client for Anthropic API\n    self.available_tools = [] # Tools from all connected servers\n    self.available_prompts = [] # Prompts from all connected servers  \n    self.sessions = {} # Maps tool\/prompt\/resource names to MCP sessions\n\n  async def cleanup(self):\n    &quot;&quot;&quot;Clean up resources and close all connections.&quot;&quot;&quot;\n    await self.exit_stack.aclose()<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Connecting to&nbsp;servers<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">The first task for our chatbot is to initiate connections with all configured MCP servers and discover what capabilities we can use.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The list of MCP servers that our agent can connect to is defined in the <code>server_config.json<\/code> file. 
I\u2019ve set up connections with three MCP servers:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/github.com\/miptgirl\/mcp-analyst-toolkit\" target=\"_blank\" rel=\"noreferrer noopener\">analyst_toolkit<\/a> is my implementation of the everyday analytical tools we discussed in the previous article,&nbsp;<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/github.com\/modelcontextprotocol\/servers\/tree\/main\/src\/filesystem\" target=\"_blank\" rel=\"noreferrer noopener\">Filesystem<\/a> allows the agent to work with files,<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/github.com\/modelcontextprotocol\/servers\/tree\/main\/src\/fetch\" target=\"_blank\" rel=\"noreferrer noopener\">Fetch<\/a> helps LLMs retrieve the content of webpages and convert it from HTML to markdown for better readability.<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-json\">{\n  &quot;mcpServers&quot;: {\n    &quot;analyst_toolkit&quot;: {\n      &quot;command&quot;: &quot;uv&quot;,\n      &quot;args&quot;: [\n        &quot;--directory&quot;,\n        &quot;\/path\/to\/github\/mcp-analyst-toolkit\/src\/mcp_server&quot;,\n        &quot;run&quot;,\n        &quot;server.py&quot;\n      ],\n      &quot;env&quot;: {\n          &quot;GITHUB_TOKEN&quot;: &quot;your_github_token&quot;\n      }\n    },\n    &quot;filesystem&quot;: {\n      &quot;command&quot;: &quot;npx&quot;,\n      &quot;args&quot;: [\n        &quot;-y&quot;,\n        &quot;@modelcontextprotocol\/server-filesystem&quot;,\n        &quot;\/Users\/marie\/Desktop&quot;,\n        &quot;\/Users\/marie\/Documents\/github&quot;\n      ]\n    },\n    &quot;fetch&quot;: {\n        &quot;command&quot;: &quot;uvx&quot;,\n        &quot;args&quot;: [&quot;mcp-server-fetch&quot;]\n      }\n  }\n}<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">First, we will read the config file, parse it and then connect to each listed 
server.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def connect_to_servers(self):\n  &quot;&quot;&quot;Load server configuration and connect to all configured MCP servers.&quot;&quot;&quot;\n  try:\n    with open(&quot;server_config.json&quot;, &quot;r&quot;) as file:\n      data = json.load(file)\n    \n    servers = data.get(&quot;mcpServers&quot;, {})\n    for server_name, server_config in servers.items():\n      await self.connect_to_server(server_name, server_config)\n  except Exception as e:\n    print(f&quot;Error loading server config: {e}&quot;)\n    traceback.print_exc()\n    raise<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">For each server, we perform several steps to establish the connection:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><strong>At the transport level,<\/strong> we launch the MCP server as a stdio process and get streams for sending and receiving messages.&nbsp;<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>At the session level<\/strong>, we create a <code>ClientSession<\/code> incorporating the streams, and then we perform the MCP handshake by calling the <code>initialize<\/code> method.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">We register both the session and transport objects in the context manager <code>exit_stack<\/code> to ensure that all connections will be closed properly in the end.&nbsp;<\/li>\n\n\n\n<li class=\"wp-block-list-item\">The last step is to <strong>register server capabilities<\/strong>. 
We wrap this functionality in a separate function, which we will discuss shortly.<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def connect_to_server(self, server_name, server_config):\n    &quot;&quot;&quot;Connect to a single MCP server and register its capabilities.&quot;&quot;&quot;\n    try:\n      server_params = StdioServerParameters(**server_config)\n      stdio_transport = await self.exit_stack.enter_async_context(\n          stdio_client(server_params)\n      )\n      read, write = stdio_transport\n      session = await self.exit_stack.enter_async_context(\n          ClientSession(read, write)\n      )\n      await session.initialize()\n      await self._register_server_capabilities(session, server_name)\n            \n    except Exception as e:\n      print(f&quot;Error connecting to {server_name}: {e}&quot;)\n      traceback.print_exc()<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Registering capabilities involves iterating over all the tools, prompts and resources retrieved from the session. 
As a result, we update the internal variables <code>sessions<\/code> (<em>mapping between resources and a particular session between the MCP client and server<\/em>), <code>available_prompts<\/code> and <code>available_tools<\/code>.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def _register_server_capabilities(self, session, server_name):\n  &quot;&quot;&quot;Register tools, prompts and resources from a single server.&quot;&quot;&quot;\n  capabilities = [\n    (&quot;tools&quot;, session.list_tools, self._register_tools),\n    (&quot;prompts&quot;, session.list_prompts, self._register_prompts), \n    (&quot;resources&quot;, session.list_resources, self._register_resources)\n  ]\n  \n  for capability_name, list_method, register_method in capabilities:\n    try:\n      response = await list_method()\n      await register_method(response, session)\n    except Exception as e:\n      print(f&quot;Server {server_name} doesn&#039;t support {capability_name}: {e}&quot;)\n\nasync def _register_tools(self, response, session):\n  &quot;&quot;&quot;Register tools from server response.&quot;&quot;&quot;\n  for tool in response.tools:\n    self.sessions[tool.name] = session\n    self.available_tools.append({\n        &quot;name&quot;: tool.name,\n        &quot;description&quot;: tool.description,\n        &quot;input_schema&quot;: tool.inputSchema\n    })\n\nasync def _register_prompts(self, response, session):\n  &quot;&quot;&quot;Register prompts from server response.&quot;&quot;&quot;\n  if response and response.prompts:\n    for prompt in response.prompts:\n        self.sessions[prompt.name] = session\n        self.available_prompts.append({\n            &quot;name&quot;: prompt.name,\n            &quot;description&quot;: prompt.description,\n            &quot;arguments&quot;: prompt.arguments\n        })\n\nasync def _register_resources(self, response, session):\n  &quot;&quot;&quot;Register resources from server 
response.&quot;&quot;&quot;\n  if response and response.resources:\n    for resource in response.resources:\n        resource_uri = str(resource.uri)\n        self.sessions[resource_uri] = session<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">By the end of this stage, our <code>MCP_ChatBot<\/code> object has everything it needs to start interacting with users:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">connections to all configured MCP servers are established,<\/li>\n\n\n\n<li class=\"wp-block-list-item\">all prompts, resources and tools are registered, including descriptions needed for LLM to understand how to use these capabilities,<\/li>\n\n\n\n<li class=\"wp-block-list-item\">mappings between these resources and their respective sessions are stored, so we know exactly where to send each request.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Chat loop<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">So, it\u2019s time to start our chat with users by creating the <code>chat_loop<\/code> function.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">We will first share all the available commands with the user:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">listing resources, tools and prompts&nbsp;<\/li>\n\n\n\n<li class=\"wp-block-list-item\">executing a tool call&nbsp;<\/li>\n\n\n\n<li class=\"wp-block-list-item\">viewing a resource&nbsp;<\/li>\n\n\n\n<li class=\"wp-block-list-item\">using a prompt template<\/li>\n\n\n\n<li class=\"wp-block-list-item\">quitting the chat (<em>it\u2019s important to have a clear way to exit the infinite loop<\/em>).<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">After that, we will enter an infinite loop where, based on user input, we will execute the appropriate action: whether it\u2019s one of the commands above or making a request to the LLM.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def 
chat_loop(self):\n  &quot;&quot;&quot;Main interactive chat loop with command processing.&quot;&quot;&quot;\n  print(&quot;\\nMCP Chatbot Started!&quot;)\n  print(&quot;Commands:&quot;)\n  print(&quot;  quit                           - Exit the chatbot&quot;)\n  print(&quot;  @periods                       - Show available changelog periods&quot;) \n  print(&quot;  @&lt;period&gt;                      - View changelog for specific period&quot;)\n  print(&quot;  \/tools                         - List available tools&quot;)\n  print(&quot;  \/tool &lt;name&gt; &lt;arg1=value1&gt;     - Execute a tool with arguments&quot;)\n  print(&quot;  \/prompts                       - List available prompts&quot;)\n  print(&quot;  \/prompt &lt;name&gt; &lt;arg1=value1&gt;   - Execute a prompt with arguments&quot;)\n  \n  while True:\n    try:\n      query = input(&quot;\\nQuery: &quot;).strip()\n      if not query:\n          continue\n\n      if query.lower() == &#039;quit&#039;:\n          break\n      \n      # Handle resource requests (@command)\n      if query.startswith(&#039;@&#039;):\n        period = query[1:]\n        resource_uri = &quot;changelog:\/\/periods&quot; if period == &quot;periods&quot; else f&quot;changelog:\/\/{period}&quot;\n        await self.get_resource(resource_uri)\n        continue\n      \n      # Handle slash commands\n      if query.startswith(&#039;\/&#039;):\n        parts = self._parse_command_arguments(query)\n        if not parts:\n          continue\n            \n        command = parts[0].lower()\n        \n        if command == &#039;\/tools&#039;:\n          await self.list_tools()\n        elif command == &#039;\/tool&#039;:\n          if len(parts) &lt; 2:\n            print(&quot;Usage: \/tool &lt;name&gt; &lt;arg1=value1&gt; &lt;arg2=value2&gt;&quot;)\n            continue\n            \n          tool_name = parts[1]\n          args = self._parse_prompt_arguments(parts[2:])\n          await self.execute_tool(tool_name, args)\n      
  elif command == &#039;\/prompts&#039;:\n          await self.list_prompts()\n        elif command == &#039;\/prompt&#039;:\n          if len(parts) &lt; 2:\n            print(&quot;Usage: \/prompt &lt;name&gt; &lt;arg1=value1&gt; &lt;arg2=value2&gt;&quot;)\n            continue\n          \n          prompt_name = parts[1]\n          args = self._parse_prompt_arguments(parts[2:])\n          await self.execute_prompt(prompt_name, args)\n        else:\n          print(f&quot;Unknown command: {command}&quot;)\n        continue\n      \n      # Process regular queries\n      await self.process_query(query)\n            \n    except Exception as e:\n      print(f&quot;\\nError in chat loop: {e}&quot;)\n      traceback.print_exc()<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">There are a bunch of helper functions to parse arguments and return the lists of available tools and prompts we registered earlier. Since it\u2019s fairly straightforward, I won\u2019t go into much detail here. 
You can check <a href=\"https:\/\/github.com\/miptgirl\/miptgirl_medium\/blob\/main\/mcp_client_example\/mcp_client_example_base.py\" rel=\"noreferrer noopener\" target=\"_blank\">the full code<\/a> if you are interested.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Instead, let\u2019s dive deeper into how the interactions between the MCP client and server work in different scenarios.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">When working with resources, we use the <code>self.sessions<\/code> mapping to find the appropriate session (with a fallback option if needed) and then use that session to read the resource.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def get_resource(self, resource_uri):\n  &quot;&quot;&quot;Retrieve and display content from an MCP resource.&quot;&quot;&quot;\n  session = self.sessions.get(resource_uri)\n  \n  # Fallback: find any session that handles this resource type\n  if not session and resource_uri.startswith(&quot;changelog:\/\/&quot;):\n    session = next(\n        (sess for uri, sess in self.sessions.items() \n         if uri.startswith(&quot;changelog:\/\/&quot;)), \n        None\n    )\n      \n  if not session:\n    print(f&quot;Resource &#039;{resource_uri}&#039; not found.&quot;)\n    return\n\n  try:\n    result = await session.read_resource(uri=resource_uri)\n    if result and result.contents:\n        print(f&quot;\\nResource: {resource_uri}&quot;)\n        print(&quot;Content:&quot;)\n        print(result.contents[0].text)\n    else:\n        print(&quot;No content available.&quot;)\n  except Exception as e:\n    print(f&quot;Error reading resource: {e}&quot;)\n    traceback.print_exc()<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">To execute a tool, we follow a similar process: start by finding the session and then use it to call the tool, passing its name and arguments.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def 
execute_tool(self, tool_name, args):\n  &quot;&quot;&quot;Execute an MCP tool directly with given arguments.&quot;&quot;&quot;\n  session = self.sessions.get(tool_name)\n  if not session:\n      print(f&quot;Tool &#039;{tool_name}&#039; not found.&quot;)\n      return\n  \n  try:\n      result = await session.call_tool(tool_name, arguments=args)\n      print(f&quot;\\nTool &#039;{tool_name}&#039; result:&quot;)\n      print(result.content)\n  except Exception as e:\n      print(f&quot;Error executing tool: {e}&quot;)\n      traceback.print_exc()<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">No surprise here. The same approach works for executing a prompt.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def execute_prompt(self, prompt_name, args):\n    &quot;&quot;&quot;Execute an MCP prompt with given arguments and process the result.&quot;&quot;&quot;\n    session = self.sessions.get(prompt_name)\n    if not session:\n        print(f&quot;Prompt &#039;{prompt_name}&#039; not found.&quot;)\n        return\n    \n    try:\n        result = await session.get_prompt(prompt_name, arguments=args)\n        if result and result.messages:\n            prompt_content = result.messages[0].content\n            text = self._extract_prompt_text(prompt_content)\n            \n            print(f&quot;\\nExecuting prompt &#039;{prompt_name}&#039;...&quot;)\n            await self.process_query(text)\n    except Exception as e:\n        print(f&quot;Error executing prompt: {e}&quot;)\n        traceback.print_exc()<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">The only major use case we haven\u2019t covered yet is handling general, free-form input from a user (rather than one of the specific commands).&nbsp;<br>In this case, we send the initial request to the LLM first, then we parse the output, checking whether it contains any tool calls. If tool calls are present, we execute them. 
Otherwise, we exit the infinite loop and return the answer to the user.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def process_query(self, query):\n  &quot;&quot;&quot;Process a user query through Anthropic&#039;s Claude, handling tool calls iteratively.&quot;&quot;&quot;\n  messages = [{&#039;role&#039;: &#039;user&#039;, &#039;content&#039;: query}]\n  \n  while True:\n    response = self.anthropic.messages.create(\n        max_tokens=2024,\n        model=&#039;claude-3-7-sonnet-20250219&#039;, \n        tools=self.available_tools,\n        messages=messages\n    )\n    \n    assistant_content = []\n    has_tool_use = False\n    \n    for content in response.content:\n        if content.type == &#039;text&#039;:\n            print(content.text)\n            assistant_content.append(content)\n        elif content.type == &#039;tool_use&#039;:\n            has_tool_use = True\n            assistant_content.append(content)\n            messages.append({&#039;role&#039;: &#039;assistant&#039;, &#039;content&#039;: assistant_content})\n            \n            # Execute the tool call\n            session = self.sessions.get(content.name)\n            if not session:\n                print(f&quot;Tool &#039;{content.name}&#039; not found.&quot;)\n                break\n                \n            result = await session.call_tool(content.name, arguments=content.input)\n            messages.append({\n                &quot;role&quot;: &quot;user&quot;, \n                &quot;content&quot;: [{\n                    &quot;type&quot;: &quot;tool_result&quot;,\n                    &quot;tool_use_id&quot;: content.id,\n                    &quot;content&quot;: result.content\n                }]\n            })\n    \n    # Exit the loop once the model stops requesting tools\n    if not has_tool_use:\n        break<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">So, we have now fully covered how the MCP chatbot actually works under the hood. 
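To make the message bookkeeping in this loop concrete, here is the shape of the <code>messages<\/code> list after one tool-call round trip (the values below are illustrative, not real API output):

```python
# Illustrative conversation state after a single tool-call round trip.
# The tool_use id ties the assistant's tool request to its result.
messages = [
    {'role': 'user', 'content': 'How many customers did we have in May 2024?'},
    {'role': 'assistant', 'content': [
        # text block(s) the model emitted before deciding to call a tool
        {'type': 'text', 'text': "I'll query the database."},
        # the tool call itself
        {'type': 'tool_use', 'id': 'toolu_01', 'name': 'execute_sql_query',
         'input': {'query': 'select uniqExact(user_id) from ecommerce.sessions'}},
    ]},
    # the tool result is sent back as a user message
    {'role': 'user', 'content': [
        {'type': 'tool_result', 'tool_use_id': 'toolu_01', 'content': '246852'},
    ]},
]
# On the next loop iteration, this full history is sent back to the model,
# which can now produce the final text answer.
```
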
Now, it\u2019s time to test it in action. You can run it from the command line interface with the following command.&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-bash\">python mcp_client_example_base.py<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">When you run the chatbot, you\u2019ll first see the following introduction message outlining potential options:<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-markup\">MCP Chatbot Started!\nCommands:\n  quit                           - Exit the chatbot\n  @periods                       - Show available changelog periods\n  @&lt;period&gt;                      - View changelog for specific period\n  \/tools                         - List available tools\n  \/tool &lt;name&gt; &lt;arg1=value1&gt;     - Execute a tool with arguments\n  \/prompts                       - List available prompts\n  \/prompt &lt;name&gt; &lt;arg1=value1&gt;   - Execute a prompt with arguments<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">From there, you can try out different commands, for example,&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">call the tool to list the available databases<\/li>\n\n\n\n<li class=\"wp-block-list-item\">list all available prompts&nbsp;<\/li>\n\n\n\n<li class=\"wp-block-list-item\">use the prompt template, calling it like this <code>\/prompt sql_query_prompt question=&quot;How many customers did we have in May 2024?&quot;<\/code>.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Finally, you can end the chat by typing <code>quit<\/code>.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-markup\">Query: \/tool list_databases\n[07\/02\/25 18:27:28] INFO     Processing request of type CallToolRequest                server.py:619\nTool &#039;list_databases&#039; result:\n[TextContent(type=&#039;text&#039;, 
text=&#039;INFORMATION_SCHEMA\\ndatasets\\ndefault\\necommerce\\necommerce_db\\ninformation_schema\\nsystem\\n&#039;, annotations=None, meta=None)]\n\nQuery: \/prompts\nAvailable prompts:\n- sql_query_prompt: Create a SQL query prompt\n  Arguments:\n    - question\n\nQuery: \/prompt sql_query_prompt question=&quot;How many customers did we have in May 2024?&quot;\n[07\/02\/25 18:28:21] INFO     Processing request of type GetPromptRequest               server.py:619\nExecuting prompt &#039;sql_query_prompt&#039;...\nI&#039;ll create a SQL query to find the number of customers in May 2024.\n[07\/02\/25 18:28:25] INFO     Processing request of type CallToolRequest                server.py:619\nBased on the query results, here&#039;s the final SQL query:\n```sql\nselect uniqExact(user_id) as customer_count\nfrom ecommerce.sessions\nwhere toStartOfMonth(action_date) = &#039;2024-05-01&#039;\nformat TabSeparatedWithNames\n```\nQuery: \/tool execute_sql_query query=&quot;select uniqExact(user_id) as customer_count from ecommerce.sessions where toStartOfMonth(action_date) = &#039;2024-05-01&#039; format TabSeparatedWithNames&quot;\nI&#039;ll help you execute this SQL query to get the unique customer count for May 2024. Let me run this for you.\n[07\/02\/25 18:30:09] INFO     Processing request of type CallToolRequest                server.py:619\nThe query has been executed successfully. The results show that there were 246,852 unique customers (unique user_ids) in May 2024 based on the ecommerce.sessions table.\n\nQuery: quit<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Looks pretty cool! Our basic version is working well! 
Now, it\u2019s time to take it one step further and make our chatbot smarter by teaching it to suggest relevant prompts on the fly based on customer input.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Prompt suggestions<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">In practice, suggesting prompt templates that best match the user\u2019s task can be incredibly helpful. Right now, users of our chatbot need to either already know about available prompts or at least be curious enough to explore them on their own to benefit from what we\u2019ve built. By adding a prompt suggestions feature, we can do this discovery for our users and make our chatbot significantly more convenient and user-friendly.<br><br>Let\u2019s brainstorm ways to add this functionality. I would approach this feature in the following way:<br><br><strong>Evaluate the relevance of the prompts using the LLM. <\/strong>Iterate through all available prompt templates and, for each one, assess whether the prompt is a good match for the user\u2019s query.<br><br><strong>Suggest a matching prompt to the user.<\/strong> If we found the relevant prompt template, share it with the user and ask whether they would like to execute it.&nbsp;<br><br><strong>Merge the prompt template with the user input.<\/strong> If the user accepts, combine the selected prompt with the original query. Since prompt templates have placeholders, we might need the LLM to fill them in. Once we\u2019ve merged the prompt template with the user\u2019s query, we\u2019ll have an updated message ready to send to the LLM.<br><br>We will add this logic to the <code>process_query<\/code> function. Thanks to our modular design, it\u2019s pretty easy to add this enhancement without disrupting the rest of the code.&nbsp;<br><br>Let\u2019s start by implementing a function to find the most relevant prompt template. We will use the LLM to evaluate each prompt and assign it a relevance score from 0 to 5. 
After that, we\u2019ll filter out any prompts with a score of 2 or lower and return only the most relevant one (the one with the highest relevance score among the remaining results).<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def _find_matching_prompt(self, query):\n  &quot;&quot;&quot;Find a matching prompt for the given query using LLM evaluation.&quot;&quot;&quot;\n  if not self.available_prompts:\n    return None\n  \n  # Use LLM to evaluate prompt relevance\n  prompt_scores = []\n  \n  for prompt in self.available_prompts:\n    # Create evaluation prompt for the LLM\n    evaluation_prompt = f&quot;&quot;&quot;\nYou are an expert at evaluating whether a prompt template is relevant for a user query.\n\nUser Query: &quot;{query}&quot;\n\nPrompt Template:\n- Name: {prompt[&#039;name&#039;]}\n- Description: {prompt[&#039;description&#039;]}\n\nRate the relevance of this prompt template for the user query on a scale of 0-5:\n- 0: Completely irrelevant\n- 1: Slightly relevant\n- 2: Somewhat relevant  \n- 3: Moderately relevant\n- 4: Highly relevant\n- 5: Perfect match\n\nConsider:\n- Does the prompt template address the user&#039;s intent?\n- Would using this prompt template provide a better response than a generic query?\n- Are the topics and context aligned?\n\nRespond with only a single number (0-5) and no other text.\n&quot;&quot;&quot;\n      \n    try:\n      response = self.anthropic.messages.create(\n          max_tokens=10,\n          model=&#039;claude-3-7-sonnet-20250219&#039;,\n          messages=[{&#039;role&#039;: &#039;user&#039;, &#039;content&#039;: evaluation_prompt}]\n      )\n      \n      # Extract the score from the response\n      score_text = response.content[0].text.strip()\n      score = int(score_text)\n      \n      if score &gt;= 3:  # Only consider prompts with score &gt;= 3\n          prompt_scores.append((prompt, score))\n            \n    except Exception as e:\n        
print(f&quot;Error evaluating prompt {prompt[&#039;name&#039;]}: {e}&quot;)\n        continue\n  \n  # Return the prompt with the highest score\n  if prompt_scores:\n      best_prompt, best_score = max(prompt_scores, key=lambda x: x[1])\n      return best_prompt\n  \n  return None<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">The next function we need to implement is one that combines the selected prompt template with the user input. We will rely on the LLM to intelligently combine them, filling all placeholders as needed.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def _combine_prompt_with_query(self, prompt_name, user_query):\n  &quot;&quot;&quot;Use LLM to combine prompt template with user query.&quot;&quot;&quot;\n  # First, get the prompt template content\n  session = self.sessions.get(prompt_name)\n  if not session:\n      print(f&quot;Prompt &#039;{prompt_name}&#039; not found.&quot;)\n      return None\n  \n  try:\n      # Find the prompt definition to get its arguments\n      prompt_def = None\n      for prompt in self.available_prompts:\n          if prompt[&#039;name&#039;] == prompt_name:\n              prompt_def = prompt\n              break\n      \n      # Prepare arguments for the prompt template\n      args = {}\n      if prompt_def and prompt_def.get(&#039;arguments&#039;):\n          for arg in prompt_def[&#039;arguments&#039;]:\n              arg_name = arg.name if hasattr(arg, &#039;name&#039;) else arg.get(&#039;name&#039;, &#039;&#039;)\n              if arg_name:\n                  # Use placeholder format for arguments\n                  args[arg_name] = &#039;&lt;&#039; + str(arg_name) + &#039;&gt;&#039;\n      \n      # Get the prompt template with arguments\n      result = await session.get_prompt(prompt_name, arguments=args)\n      if not result or not result.messages:\n          print(f&quot;Could not retrieve prompt template for &#039;{prompt_name}&#039;&quot;)\n          return 
None\n      \n      prompt_content = result.messages[0].content\n      prompt_text = self._extract_prompt_text(prompt_content)\n      \n      # Create combination prompt for the LLM\n      combination_prompt = f&quot;&quot;&quot;\nYou are an expert at combining prompt templates with user queries to create optimized prompts.\n\nOriginal User Query: &quot;{user_query}&quot;\n\nPrompt Template:\n{prompt_text}\n\nYour task:\n1. Analyze the user&#039;s query and the prompt template\n2. Combine them intelligently to create a single, coherent prompt\n3. Ensure the user&#039;s specific question\/request is addressed within the context of the template\n4. Maintain the structure and intent of the template while incorporating the user&#039;s query\n\nRespond with only the combined prompt text, no explanations or additional text.\n&quot;&quot;&quot;\n      \n      response = self.anthropic.messages.create(\n          max_tokens=2048,\n          model=&#039;claude-3-7-sonnet-20250219&#039;,\n          messages=[{&#039;role&#039;: &#039;user&#039;, &#039;content&#039;: combination_prompt}]\n      )\n      \n      return response.content[0].text.strip()\n      \n  except Exception as e:\n      print(f&quot;Error combining prompt with query: {e}&quot;)\n      return None<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Then, we will simply update the <code>process_query<\/code> logic to check for matching prompts, ask the user for confirmation and decide which message to send to the LLM.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">async def process_query(self, query):\n  &quot;&quot;&quot;Process a user query through Anthropic&#039;s Claude, handling tool calls iteratively.&quot;&quot;&quot;\n  # Check if there&#039;s a matching prompt first\n  matching_prompt = await self._find_matching_prompt(query)\n  \n  if matching_prompt:\n    print(f&quot;Found matching prompt: {matching_prompt[&#039;name&#039;]}&quot;)\n    
print(f&quot;Description: {matching_prompt[&#039;description&#039;]}&quot;)\n    \n    # Ask user if they want to use the prompt template\n    use_prompt = input(&quot;Would you like to use this prompt template? (y\/n): &quot;).strip().lower()\n    \n    if use_prompt == &#039;y&#039; or use_prompt == &#039;yes&#039;:\n        print(&quot;Combining prompt template with your query...&quot;)\n        \n        # Use LLM to combine prompt template with user query\n        combined_prompt = await self._combine_prompt_with_query(matching_prompt[&#039;name&#039;], query)\n        \n        if combined_prompt:\n            print(f&quot;Combined prompt created. Processing...&quot;)\n            # Process the combined prompt instead of the original query\n            messages = [{&#039;role&#039;: &#039;user&#039;, &#039;content&#039;: combined_prompt}]\n        else:\n            print(&quot;Failed to combine prompt template. Using original query.&quot;)\n            messages = [{&#039;role&#039;: &#039;user&#039;, &#039;content&#039;: query}]\n    else:\n        # Use original query if user doesn&#039;t want to use the prompt\n        messages = [{&#039;role&#039;: &#039;user&#039;, &#039;content&#039;: query}]\n  else:\n    # Process the original query if no matching prompt found\n    messages = [{&#039;role&#039;: &#039;user&#039;, &#039;content&#039;: query}]\n\n  # print(messages)\n  \n  # Process the final query (either original or combined)\n  while True:\n    response = self.anthropic.messages.create(\n        max_tokens=2024,\n        model=&#039;claude-3-7-sonnet-20250219&#039;, \n        tools=self.available_tools,\n        messages=messages\n    )\n    \n    assistant_content = []\n    has_tool_use = False\n    \n    for content in response.content:\n      if content.type == &#039;text&#039;:\n          print(content.text)\n          assistant_content.append(content)\n      elif content.type == &#039;tool_use&#039;:\n          has_tool_use = True\n          
assistant_content.append(content)\n          messages.append({&#039;role&#039;: &#039;assistant&#039;, &#039;content&#039;: assistant_content})\n          \n          # Log tool call information\n          print(f&quot;\\n[TOOL CALL] Tool: {content.name}&quot;)\n          print(f&quot;[TOOL CALL] Arguments: {json.dumps(content.input, indent=2)}&quot;)\n          \n          # Execute the tool call\n          session = self.sessions.get(content.name)\n          if not session:\n              print(f&quot;Tool &#039;{content.name}&#039; not found.&quot;)\n              break\n              \n          result = await session.call_tool(content.name, arguments=content.input)\n          \n          # Log tool result\n          print(f&quot;[TOOL RESULT] Tool: {content.name}&quot;)\n          print(f&quot;[TOOL RESULT] Content: {result.content}&quot;)\n          \n          messages.append({\n              &quot;role&quot;: &quot;user&quot;, \n              &quot;content&quot;: [{\n                  &quot;type&quot;: &quot;tool_result&quot;,\n                  &quot;tool_use_id&quot;: content.id,\n                  &quot;content&quot;: result.content\n              }]\n          })\n      \n    if not has_tool_use:\n        break<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Now, let\u2019s test our updated version with a question about our data. Excitingly, the chatbot was able to find the right prompt and use it to find the right answer.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-markup\">Query: How many customers did we have in May 2024?\nFound matching prompt: sql_query_prompt\nDescription: Create a SQL query prompt\nWould you like to use this prompt template? (y\/n): y\nCombining prompt template with your query...\n[07\/05\/25 14:38:58] INFO     Processing request of type GetPromptRequest               server.py:619\nCombined prompt created. 
Processing...\nI&#039;ll write a query to count unique customers who had sessions in May 2024. Since this is a business metric, I&#039;ll exclude fraudulent sessions.\n\n[TOOL CALL] Tool: execute_sql_query\n[TOOL CALL] Arguments: {\n  &quot;query&quot;: &quot;\/* Count distinct users with non-fraudulent sessions in May 2024\\n   Using uniqExact for precise user count\\n   Filtering for May 2024 using toStartOfMonth and adding date range *\/\\nSELECT \\n    uniqExactIf(s.user_id, s.is_fraud = 0) AS active_customers_count\\nFROM ecommerce.sessions s\\nWHERE toStartOfMonth(action_date) = toDate(&#039;2024-05-01&#039;)\\nFORMAT TabSeparatedWithNames&quot;\n}\n[07\/05\/25 14:39:17] INFO     Processing request of type CallToolRequest                server.py:619\n[TOOL RESULT] Tool: execute_sql_query\n[TOOL RESULT] Content: [TextContent(type=&#039;text&#039;, text=&#039;active_customers_count\\n245287\\n&#039;, annotations=None, meta=None)]\nThe query shows we had 245,287 unique customers with legitimate (non-fraudulent) sessions in May 2024. Here&#039;s a breakdown of why I wrote the query this way:\n\n1. Used uniqExactIf() to get precise count of unique users while excluding fraudulent sessions in one step\n2. Used toStartOfMonth() to ensure we capture all days in May 2024\n3. Specified the date format properly with toDate(&#039;2024-05-01&#039;)\n4. Used TabSeparatedWithNames format as required\n5. Provided a meaningful column alias\n\nWould you like to see any variations of this analysis, such as including fraudulent sessions or breaking down the numbers by country?<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">It\u2019s always a good idea to test negative examples as well. 
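<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Such checks can even be captured as a quick unit test. The sketch below is illustrative: it stubs out the LLM relevance scores and re-implements only the selection rule (keep scores of 3 and above, return the top-scoring prompt), so the helper name and the hard-coded scores are assumptions rather than our actual client code:<\/p>

```python
# Illustrative stand-alone mirror of the selection rule inside
# _find_matching_prompt: keep prompts scoring >= threshold, pick the best.
def pick_best_prompt(prompts, scores, threshold=3):
    scored = [(p, scores[p["name"]]) for p in prompts
              if scores[p["name"]] >= threshold]
    return max(scored, key=lambda x: x[1])[0] if scored else None

prompts = [{"name": "sql_query_prompt",
            "description": "Create a SQL query prompt"}]

# An unrelated query like "How are you?" should score low -> no suggestion
assert pick_best_prompt(prompts, {"sql_query_prompt": 1}) is None
# A data question should score high -> the SQL prompt is suggested
assert pick_best_prompt(prompts, {"sql_query_prompt": 5})["name"] == "sql_query_prompt"
```

<p class=\"wp-block-paragraph\">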
In this case, the chatbot behaves as expected and doesn\u2019t suggest an SQL-related prompt when given an unrelated question.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-markup\">Query: How are you?\nI should note that I&#039;m an AI assistant focused on helping you work with the available tools, which include executing SQL queries, getting database\/table information, and accessing GitHub PR data. I don&#039;t have a tool specifically for responding to personal questions.\n\nI can help you:\n- Query a ClickHouse database\n- List databases and describe tables\n- Get information about GitHub Pull Requests\n\nWhat would you like to know about these areas?<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Now that our chatbot is up and running, we\u2019re ready to wrap things up.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">BONUS: quick and easy MCP client with smolagents<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">We\u2019ve looked at low-level code that enables building highly customised MCP clients, but many use cases require only basic functionality. So, I decided to share with you a quick and straightforward implementation for scenarios when you need just the tools. 
We will use one of my favourite agent frameworks\u200a\u2014\u200asmolagents from HuggingFace (<em>I\u2019ve discussed this framework in detail in <\/em><a href=\"https:\/\/towardsdatascience.com\/code-agents-the-future-of-agentic-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>my previous article<\/em><\/a>).<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\"># needed imports\nfrom smolagents import CodeAgent, DuckDuckGoSearchTool, LiteLLMModel, VisitWebpageTool, ToolCallingAgent, ToolCollection\nfrom mcp import StdioServerParameters\nimport json\nimport os\n\n# setting OpenAI APIKey \nwith open(&#039;..\/..\/config.json&#039;) as f:\n    config = json.loads(f.read())\n\nos.environ[&quot;OPENAI_API_KEY&quot;] = config[&#039;OPENAI_API_KEY&#039;]\n\n# defining the LLM \nmodel = LiteLLMModel(\n    model_id=&quot;openai\/gpt-4o-mini&quot;,  \n    max_tokens=2048\n)\n\n# configuration for the MCP server\nserver_parameters = StdioServerParameters(\n    command=&quot;uv&quot;,\n    args=[\n        &quot;--directory&quot;,\n        &quot;\/path\/to\/github\/mcp-analyst-toolkit\/src\/mcp_server&quot;,\n        &quot;run&quot;,\n        &quot;server.py&quot;\n    ],\n    env={&quot;GITHUB_TOKEN&quot;: &quot;github_&lt;your_token&gt;&quot;},\n)\n\n# prompt \nCLICKHOUSE_PROMPT_TEMPLATE = &quot;&quot;&quot;\nYou are a senior data analyst with more than 10 years of experience writing complex SQL queries, specifically optimized for ClickHouse to answer user questions.\n\n## Database Schema\n\nYou are working with an e-commerce analytics database containing the following tables:\n\n### Table: ecommerce.users \n**Description:** Customer information for the online shop\n**Primary Key:** user_id\n**Fields:** \n- user_id (Int64) - Unique customer identifier (e.g., 1000004, 3000004)\n- country (String) - Customer&#039;s country of residence (e.g., &quot;Netherlands&quot;, &quot;United Kingdom&quot;)\n- is_active (Int8) - Customer status: 1 = 
active, 0 = inactive\n- age (Int32) - Customer age in full years (e.g., 31, 72)\n\n### Table: ecommerce.sessions \n**Description:** User session data and transaction records\n**Primary Key:** session_id\n**Foreign Key:** user_id (references ecommerce.users.user_id)\n**Fields:** \n- user_id (Int64) - Customer identifier linking to users table (e.g., 1000004, 3000004)\n- session_id (Int64) - Unique session identifier (e.g., 106, 1023)\n- action_date (Date) - Session start date (e.g., &quot;2021-01-03&quot;, &quot;2024-12-02&quot;)\n- session_duration (Int32) - Session duration in seconds (e.g., 125, 49)\n- os (String) - Operating system used (e.g., &quot;Windows&quot;, &quot;Android&quot;, &quot;iOS&quot;, &quot;MacOS&quot;)\n- browser (String) - Browser used (e.g., &quot;Chrome&quot;, &quot;Safari&quot;, &quot;Firefox&quot;, &quot;Edge&quot;)\n- is_fraud (Int8) - Fraud indicator: 1 = fraudulent session, 0 = legitimate\n- revenue (Float64) - Purchase amount in USD (0.0 for non-purchase sessions, &gt;0 for purchases)\n\n## ClickHouse-Specific Guidelines\n\n1. **Use ClickHouse-optimized functions:**\n   - uniqExact() for precise unique counts\n   - uniqExactIf() for conditional unique counts\n   - quantile() functions for percentiles\n   - Date functions: toStartOfMonth(), toStartOfYear(), today()\n\n2. **Query formatting requirements:**\n   - Always end queries with &quot;format TabSeparatedWithNames&quot;\n   - Use meaningful column aliases\n   - Use proper JOIN syntax when combining tables\n   - Wrap date literals in quotes (e.g., &#039;2024-01-01&#039;)\n\n3. **Performance considerations:**\n   - Use appropriate WHERE clauses to filter data\n   - Consider using HAVING for post-aggregation filtering\n   - Use LIMIT when finding top\/bottom results\n\n4. 
**Data interpretation:**\n   - revenue &gt; 0 indicates a purchase session\n   - revenue = 0 indicates a browsing session without purchase\n   - is_fraud = 1 sessions should typically be excluded from business metrics unless specifically analyzing fraud\n\n## Response Format\nProvide only the SQL query as your answer. Include brief reasoning in comments if the query logic is complex. \n\n## Examples\n\n**Question:** How many customers made purchase in December 2024?\n**Answer:** select uniqExact(user_id) as customers from ecommerce.sessions where toStartOfMonth(action_date) = &#039;2024-12-01&#039; and revenue &gt; 0 format TabSeparatedWithNames\n\n**Question:** What was the fraud rate in 2023, expressed as a percentage?\n**Answer:** select 100 * uniqExactIf(user_id, is_fraud = 1) \/ uniqExact(user_id) as fraud_rate from ecommerce.sessions where toStartOfYear(action_date) = &#039;2023-01-01&#039; format TabSeparatedWithNames\n\n**Question:** What was the share of users using Windows yesterday?\n**Answer:** select 100 * uniqExactIf(user_id, os = &#039;Windows&#039;) \/ uniqExact(user_id) as windows_share from ecommerce.sessions where action_date = today() - 1 format TabSeparatedWithNames\n\n**Question:** What was the revenue from Dutch users aged 55 and older in December 2024?\n**Answer:** select sum(s.revenue) as total_revenue from ecommerce.sessions as s inner join ecommerce.users as u on s.user_id = u.user_id where u.country = &#039;Netherlands&#039; and u.age &gt;= 55 and toStartOfMonth(s.action_date) = &#039;2024-12-01&#039; format TabSeparatedWithNames\n\n**Question:** What are the median and interquartile range (IQR) of purchase revenue for each country?\n**Answer:** select country, median(revenue) as median_revenue, quantile(0.25)(revenue) as q25_revenue, quantile(0.75)(revenue) as q75_revenue from ecommerce.sessions as s inner join ecommerce.users as u on u.user_id = s.user_id where revenue &gt; 0 group by country format 
TabSeparatedWithNames\n\n**Question:** What is the average number of days between the first session and the first purchase for users who made at least one purchase?\n**Answer:** select avg(first_purchase - first_action_date) as avg_days_to_purchase from (select user_id, min(action_date) as first_action_date, minIf(action_date, revenue &gt; 0) as first_purchase, max(revenue) as max_revenue from ecommerce.sessions group by user_id) where max_revenue &gt; 0 format TabSeparatedWithNames\n\n**Question:** What is the number of sessions in December 2024, broken down by operating systems, including the totals?\n**Answer:** select os, uniqExact(session_id) as session_count from ecommerce.sessions where toStartOfMonth(action_date) = &#039;2024-12-01&#039; group by os with totals format TabSeparatedWithNames\n\n**Question:** Do we have customers who used multiple browsers during 2024? If so, please calculate the number of customers for each combination of browsers.\n**Answer:** select browsers, count(*) as customer_count from (select user_id, arrayStringConcat(arraySort(groupArray(distinct browser)), &#039;, &#039;) as browsers from ecommerce.sessions where toStartOfYear(action_date) = &#039;2024-01-01&#039; group by user_id) group by browsers order by customer_count desc format TabSeparatedWithNames\n\n**Question:** Which browser has the highest share of fraud users?\n**Answer:** select browser, 100 * uniqExactIf(user_id, is_fraud = 1) \/ uniqExact(user_id) as fraud_rate from ecommerce.sessions group by browser order by fraud_rate desc limit 1 format TabSeparatedWithNames\n\n**Question:** Which country had the highest number of first-time users in 2024?\n**Answer:** select country, count(distinct user_id) as new_users from (select user_id, min(action_date) as first_date from ecommerce.sessions group by user_id having toStartOfYear(first_date) = &#039;2024-01-01&#039;) as t inner join ecommerce.users as u on t.user_id = u.user_id group by country order by new_users desc limit 
1 format TabSeparatedWithNames\n\n---\n\n**Your Task:** Using all the provided information above, write a ClickHouse SQL query to answer the following customer question: \n{question}\n&quot;&quot;&quot;\n\nwith ToolCollection.from_mcp(server_parameters, trust_remote_code=True) as tool_collection:\n  agent = ToolCallingAgent(tools=[*tool_collection.tools], model=model)\n  prompt = CLICKHOUSE_PROMPT_TEMPLATE.format(\n      question = &#039;How many customers did we have in May 2024?&#039;\n  )\n  response = agent.run(prompt)<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">As a result, we received the correct answer.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/contributor.insightmediagroup.io\/wp-content\/uploads\/2025\/07\/12m1sKkzkno94bQIkGkw2GA.png\" alt=\"\" class=\"wp-image-608364\"\/><figcaption class=\"wp-element-caption\">Image by author<\/figcaption><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">If you don\u2019t need much customisation or integration with prompts and resources, this implementation is definitely the way to go.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Summary<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">In this article, we built a chatbot that integrates with MCP servers and leverages all the benefits of standardisation to access tools, prompts, and resources seamlessly.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">We started with a basic implementation capable of listing and accessing MCP capabilities. Then, we enhanced our chatbot with a smart feature that suggests relevant prompt templates to users based on their input. 
This makes our product more intuitive and user-friendly, especially for users unfamiliar with the complete library of available prompts.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">To implement our chatbot, we used relatively low-level code, giving you a better understanding of how the MCP protocol works under the hood and what happens when you use AI tools like Claude Desktop or Cursor.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">As a bonus, we also discussed the smolagents implementation that lets you quickly deploy an MCP client integrated with tools.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"wp-block-paragraph\"><em>Thank you for reading. I hope this article was insightful. Remember Einstein\u2019s advice: \u201cThe important thing is not to stop questioning. Curiosity has its own reason for existing.\u201d May your curiosity lead you to your next great insight.<\/em><\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">Reference<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">This article is inspired by the <a href=\"https:\/\/www.deeplearning.ai\/short-courses\/ai-agents-in-langgraph\/\" rel=\"noreferrer noopener\" target=\"_blank\"><em>\u201c<\/em><\/a><a href=\"https:\/\/www.deeplearning.ai\/short-courses\/mcp-build-rich-context-ai-apps-with-anthropic\/\" rel=\"noreferrer noopener\" target=\"_blank\"><em>MCP: Build Rich-Context AI Apps with Anthropic<\/em><\/a><a href=\"https:\/\/www.deeplearning.ai\/short-courses\/ai-agents-in-langgraph\/\" rel=\"noreferrer noopener\" target=\"_blank\"><em>\u201d<\/em><\/a> short course from <em>DeepLearning.AI.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Understanding all the details of the model context\u00a0protocol<\/p>\n","protected":false},"author":18,"featured_media":606553,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"is_member_only":false,"sub_heading":"Understanding all 
the details of the model context\u00a0protocol","footnotes":""},"categories":[17],"tags":[11899,468,465,11555,32614],"sponsor":[],"coauthors":[30132],"class_list":["post-606552","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","tag-agentic-ai","tag-deep-dives","tag-llm","tag-llm-agent","tag-mcp"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Building a \u0421ustom MCP\u00a0Chatbot | Towards Data Science<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/towardsdatascience.com\/building-a-\u0441ustom-mcp-chatbot\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Building a \u0421ustom MCP\u00a0Chatbot | Towards Data Science\" \/>\n<meta property=\"og:description\" content=\"Understanding all the details of the model context\u00a0protocol\" \/>\n<meta property=\"og:url\" content=\"https:\/\/towardsdatascience.com\/building-a-\u0441ustom-mcp-chatbot\/\" \/>\n<meta property=\"og:site_name\" content=\"Towards Data Science\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-10T19:53:09+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-07-10T19:53:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1-1024x582.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"582\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Mariya Mansurova\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" 
content=\"@TDataScience\" \/>\n<meta name=\"twitter:site\" content=\"@TDataScience\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Mariya Mansurova\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/\"},\"author\":{\"name\":\"TDS Editors\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee\"},\"headline\":\"Building a \u0421ustom MCP\u00a0Chatbot\",\"datePublished\":\"2025-07-10T19:53:09+00:00\",\"dateModified\":\"2025-07-10T19:53:27+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/\"},\"wordCount\":2144,\"publisher\":{\"@id\":\"https:\/\/towardsdatascience.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1.png\",\"keywords\":[\"Agentic Ai\",\"Deep Dives\",\"Llm\",\"Llm Agent\",\"mcp\"],\"articleSection\":[\"Artificial Intelligence\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/\",\"url\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/\",\"name\":\"Building a \u0421ustom MCP\u00a0Chatbot | Towards Data 
Science\",\"isPartOf\":{\"@id\":\"https:\/\/towardsdatascience.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1.png\",\"datePublished\":\"2025-07-10T19:53:09+00:00\",\"dateModified\":\"2025-07-10T19:53:27+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#primaryimage\",\"url\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1.png\",\"contentUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1.png\",\"width\":2560,\"height\":1456,\"caption\":\"Image generated by the author with DALL-E 3\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/towardsdatascience.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Building a \u0421ustom MCP\u00a0Chatbot\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/towardsdatascience.com\/#website\",\"url\":\"https:\/\/towardsdatascience.com\/\",\"name\":\"Towards Data Science\",\"description\":\"Publish AI, ML &amp; data-science insights to a global community of data 
professionals.\",\"publisher\":{\"@id\":\"https:\/\/towardsdatascience.com\/#organization\"},\"alternateName\":\"TDS\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/towardsdatascience.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/towardsdatascience.com\/#organization\",\"name\":\"Towards Data Science\",\"alternateName\":\"TDS\",\"url\":\"https:\/\/towardsdatascience.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg\",\"contentUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg\",\"width\":696,\"height\":696,\"caption\":\"Towards Data Science\"},\"image\":{\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/TDataScience\",\"https:\/\/www.youtube.com\/c\/TowardsDataScience\",\"https:\/\/www.linkedin.com\/company\/towards-data-science\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee\",\"name\":\"TDS Editors\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/person\/image\/23494c9101089ad44ae88ce9d2f56aac\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g\",\"caption\":\"TDS Editors\"},\"description\":\"Building a vibrant data science and machine learning community. 
Share your insights and projects with our global audience: bit.ly\/write-for-tds\",\"url\":\"https:\/\/towardsdatascience.com\/author\/towardsdatascience\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Building a \u0421ustom MCP\u00a0Chatbot | Towards Data Science","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/towardsdatascience.com\/building-a-\u0441ustom-mcp-chatbot\/","og_locale":"en_US","og_type":"article","og_title":"Building a \u0421ustom MCP\u00a0Chatbot | Towards Data Science","og_description":"Understanding all the details of the model context\u00a0protocol","og_url":"https:\/\/towardsdatascience.com\/building-a-\u0441ustom-mcp-chatbot\/","og_site_name":"Towards Data Science","article_published_time":"2025-07-10T19:53:09+00:00","article_modified_time":"2025-07-10T19:53:27+00:00","og_image":[{"width":1024,"height":582,"url":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1-1024x582.png","type":"image\/png"}],"author":"Mariya Mansurova","twitter_card":"summary_large_image","twitter_creator":"@TDataScience","twitter_site":"@TDataScience","twitter_misc":{"Written by":"Mariya Mansurova","Est. 
reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#article","isPartOf":{"@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/"},"author":{"name":"TDS Editors","@id":"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee"},"headline":"Building a \u0421ustom MCP\u00a0Chatbot","datePublished":"2025-07-10T19:53:09+00:00","dateModified":"2025-07-10T19:53:27+00:00","mainEntityOfPage":{"@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/"},"wordCount":2144,"publisher":{"@id":"https:\/\/towardsdatascience.com\/#organization"},"image":{"@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#primaryimage"},"thumbnailUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1.png","keywords":["Agentic Ai","Deep Dives","Llm","Llm Agent","mcp"],"articleSection":["Artificial Intelligence"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/","url":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/","name":"Building a \u0421ustom MCP\u00a0Chatbot | Towards Data 
Science","isPartOf":{"@id":"https:\/\/towardsdatascience.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#primaryimage"},"image":{"@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#primaryimage"},"thumbnailUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1.png","datePublished":"2025-07-10T19:53:09+00:00","dateModified":"2025-07-10T19:53:27+00:00","breadcrumb":{"@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#primaryimage","url":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1.png","contentUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-05-at-21.33.46-scaled-1.png","width":2560,"height":1456,"caption":"Image generated by the author with DALL-E 3"},{"@type":"BreadcrumbList","@id":"https:\/\/towardsdatascience.com\/building-a-%d1%81ustom-mcp-chatbot\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/towardsdatascience.com\/"},{"@type":"ListItem","position":2,"name":"Building a \u0421ustom MCP\u00a0Chatbot"}]},{"@type":"WebSite","@id":"https:\/\/towardsdatascience.com\/#website","url":"https:\/\/towardsdatascience.com\/","name":"Towards Data Science","description":"Publish AI, ML &amp; data-science insights to a global community of data 
professionals.","publisher":{"@id":"https:\/\/towardsdatascience.com\/#organization"},"alternateName":"TDS","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/towardsdatascience.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/towardsdatascience.com\/#organization","name":"Towards Data Science","alternateName":"TDS","url":"https:\/\/towardsdatascience.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/","url":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg","contentUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg","width":696,"height":696,"caption":"Towards Data Science"},"image":{"@id":"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/TDataScience","https:\/\/www.youtube.com\/c\/TowardsDataScience","https:\/\/www.linkedin.com\/company\/towards-data-science\/"]},{"@type":"Person","@id":"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee","name":"TDS Editors","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/towardsdatascience.com\/#\/schema\/person\/image\/23494c9101089ad44ae88ce9d2f56aac","url":"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g","caption":"TDS Editors"},"description":"Building a vibrant data science and machine learning community. 
Share your insights and projects with our global audience: bit.ly\/write-for-tds","url":"https:\/\/towardsdatascience.com\/author\/towardsdatascience\/"}]}},"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"TDS Contributor Portal","distributor_original_site_url":"https:\/\/contributor.insightmediagroup.io","push-errors":false,"_links":{"self":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/posts\/606552","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/users\/18"}],"replies":[{"embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/comments?post=606552"}],"version-history":[{"count":0,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/posts\/606552\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/media\/606553"}],"wp:attachment":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/media?parent=606552"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/categories?post=606552"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/tags?post=606552"},{"taxonomy":"sponsor","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/sponsor?post=606552"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/coauthors?post=606552"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}