{"id":606540,"date":"2025-07-09T14:26:24","date_gmt":"2025-07-09T19:26:24","guid":{"rendered":"https:\/\/towardsdatascience.com\/?p=606540"},"modified":"2025-07-09T14:26:36","modified_gmt":"2025-07-09T19:26:36","slug":"recap-of-all-types-of-llm-agents","status":"publish","type":"post","link":"https:\/\/towardsdatascience.com\/recap-of-all-types-of-llm-agents\/","title":{"rendered":"Recap of all types of LLM Agents"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\"><mdspan datatext=\"el1751917916233\" class=\"mdspan-comment\">Intro<\/mdspan><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">At the heart of every successful AI Agent lies one essential skill: <strong>prompting<\/strong> (or &#8220;prompt engineering&#8221;). It is the method of instructing LLMs to perform tasks by carefully designing the input text.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Prompt engineering is the evolution of the inputs for the first <strong>text-to-text NLP models<\/strong> (2018). At the time, developers typically focused more on the modeling side and feature engineering. After the creation of large GPT models (2022), we all started using mostly pre-trained tools, so the focus has shifted on the input formatting. Hence, the <strong>&#8220;prompt engineering&#8221; discipline<\/strong> was born, and now (2025) it has matured into a blend of art and science, as NLP is blurring the line between code and prompt.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Different types of prompting techniques create different types of Agents. Each method enhances a specific skill: logic, planning, memory, accuracy, and Tool integration. Let&#8217;s see them all with a very simple example.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">## setup\nimport ollama\nllm = &quot;qwen2.5&quot;\n\n## question\nq = &quot;What is 30 multiplied by 10?&quot;<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Main techniques<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">1) &#8220;<strong>Regular&#8221; prompting<\/strong> &#8211; just ask a question and get a straightforward answer. Also called &#8220;Zero-Shot&#8221; prompting specifically&nbsp;when the model is given a task without any prior examples of how to solve it. This basic technique is designed for <strong>One-Step Agents<\/strong> that execute the task without intermediate reasoning, especially early models.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/contributor.insightmediagroup.io\/wp-content\/uploads\/2025\/06\/image-151.png\" alt=\"\" class=\"wp-image-606972\" style=\"width:382px;height:auto\"\/><\/figure>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">response = ollama.chat(model=llm, messages=[\n    {&#039;role&#039;:&#039;user&#039;, &#039;content&#039;:q}\n])\nprint(response[&#039;message&#039;][&#039;content&#039;])<\/code><\/pre>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/contributor.insightmediagroup.io\/wp-content\/uploads\/2025\/06\/image-145.png\" alt=\"\" class=\"wp-image-606892\"\/><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">2) <a href=\"https:\/\/arxiv.org\/abs\/2210.03629\"><strong>ReAct (Reason+Act)<\/strong><\/a> &#8211; a combination of reasoning and action. The model not only thinks through a problem but also takes action based on its reasoning. 
2) [**ReAct (Reason+Act)**](https://arxiv.org/abs/2210.03629) – a combination of reasoning and action. The model not only thinks through a problem but also takes action based on its reasoning. So it is more interactive, as the model alternates between reasoning steps and actions, refining its approach iteratively. Basically, it is a loop of thought-action-observation. ReAct is used for **more complicated tasks**, like searching the web and making decisions based on the findings, and it is typically designed for **Multi-Step Agents** that perform a series of reasoning steps and actions to arrive at a final result. They can break down complex tasks into smaller, more manageable parts that progressively build upon one another.

Personally, I really like ReAct Agents, as I find them more similar to humans because they "f*ck around and find out" just like us.

![](https://contributor.insightmediagroup.io/wp-content/uploads/2025/06/image-152-715x1024.png)

```python
prompt = '''
To solve the task, you must plan forward and proceed in a series of steps, in a cycle of 'Thought:', 'Action:', and 'Observation:' sequences.
At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task, then the tools that you want to use.
Then, in the 'Action:' sequence, you should use one of your tools.
During each intermediate step, you can use the 'Observation:' field to save whatever important information you will use as input for the next step.
'''

response = ollama.chat(model=llm, messages=[
    {'role':'user', 'content':q+" "+prompt}
])
print(response['message']['content'])
```

![](https://contributor.insightmediagroup.io/wp-content/uploads/2025/06/image-146-1024x284.png)
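The snippet above only elicits the Thought/Action/Observation format; it does not actually execute any tool. Below is a minimal sketch of the full loop under my own assumptions (the `multiply` tool, the `Action:` parsing, and the stopping rule are illustrative choices, not part of the original article): the host code runs the tool whenever the model asks for it and feeds the result back as the next Observation.

```python
import re

## hypothetical tool the agent is allowed to call
def multiply(a, b):
    return a * b

react_prompt = f'''
Solve the task: {q}
Work in a cycle of 'Thought:', 'Action:', 'Observation:'.
When you need to compute, write a line exactly like: Action: multiply(30, 10)
When you know the answer, write a line like: Final Answer: <answer>
Produce only ONE Thought and ONE Action, then stop and wait for the Observation.
'''

messages = [{'role':'user', 'content':react_prompt}]
for _ in range(5):  # cap the number of thought-action-observation cycles
    reply = ollama.chat(model=llm, messages=messages)['message']['content']
    print(reply)
    if "Final Answer:" in reply:
        break
    match = re.search(r"Action:\s*multiply\((-?\d+)\s*,\s*(-?\d+)\)", reply)
    if match:
        obs = multiply(int(match.group(1)), int(match.group(2)))
    else:
        obs = "No valid action found. Use the multiply tool or give a Final Answer."
    # feed the observation back so the next step can build on it
    messages += [{'role':'assistant', 'content':reply},
                 {'role':'user', 'content':f"Observation: {obs}"}]
```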
3) [**Chain-of-Thought (CoT)**](https://arxiv.org/abs/2201.11903) – a reasoning pattern that involves generating the process to reach a conclusion. The model is pushed to "think out loud" by explicitly laying out the logical steps that lead to the final answer. Basically, it is a plan without feedback. CoT is the most used for **advanced tasks**, like solving a math problem that might need step-by-step reasoning, and it is typically designed for **Multi-Step Agents**.

![](https://contributor.insightmediagroup.io/wp-content/uploads/2025/06/image-153.png)

```python
prompt = '''Let's think step by step.'''

response = ollama.chat(model=llm, messages=[
    {'role':'user', 'content':q+" "+prompt}
])
print(response['message']['content'])
```

![](https://contributor.insightmediagroup.io/wp-content/uploads/2025/06/image-147-1024x311.png)

## CoT extensions

Several newer prompting approaches derive from the Chain-of-Thought technique.

4) [**Reflexion prompting**](https://arxiv.org/abs/2303.11366) adds an iterative self-check or self-correction phase on top of the initial CoT reasoning, where the model reviews and critiques its own outputs (spotting mistakes, identifying gaps, suggesting improvements).

```python
cot_answer = response['message']['content']

response = ollama.chat(model=llm, messages=[
    {'role':'user', 'content': f'''Here was your original answer:\n\n{cot_answer}\n\n
                               Now reflect on whether it was correct or if it was the best approach. 
                               If not, correct your reasoning and answer.'''}
])
print(response['message']['content'])
```

![](https://contributor.insightmediagroup.io/wp-content/uploads/2025/06/image-148-1024x371.png)
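The snippet above runs a single reflection pass. A minimal sketch of the fully iterative version, where the critique loop repeats a fixed number of rounds (the number of rounds and the "becomes the new working answer" rule are arbitrary choices for illustration):

```python
answer = cot_answer
for _ in range(2):  # arbitrary number of reflection rounds
    answer = ollama.chat(model=llm, messages=[
        {'role':'user', 'content': f'''Task: {q}
Here is the current answer:\n{answer}\n
Reflect on whether it is correct and whether the reasoning can be improved.
If something is wrong or missing, produce a corrected answer; otherwise restate the answer.'''}
    ])['message']['content']  # the critique output becomes the new working answer
print(answer)
```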
5) [**Tree-of-Thoughts (ToT)**](https://arxiv.org/abs/2305.10601) generalizes CoT into a tree, exploring multiple reasoning chains simultaneously.

```python
num_branches = 3

prompt = f'''
You will think of multiple reasoning paths (thought branches). For each path, write your reasoning and final answer.
After exploring {num_branches} different thoughts, pick the best final answer and explain why.
'''

response = ollama.chat(model=llm, messages=[
    {'role':'user', 'content': f"Task: {q} \n{prompt}"}
])
print(response['message']['content'])
```

![](https://contributor.insightmediagroup.io/wp-content/uploads/2025/06/image-149-1024x552.png)
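The prompt above asks for all branches inside a single completion. Another way to sketch the same idea, assuming that independent samples count as separate branches (my assumption, not the article's), is to generate each branch with its own call and then let a final "judge" call pick the best one:

```python
## sketch: one call per branch, then a judge call to select the best answer
branches = []
for i in range(num_branches):
    branch = ollama.chat(model=llm, messages=[
        {'role':'user', 'content': f"Task: {q}\nExplore one reasoning path and give a final answer."}
    ])['message']['content']
    branches.append(branch)

judge_prompt = "Task: " + q + "\nHere are candidate reasoning paths:\n\n" + \
               "\n\n".join(f"Branch {i+1}:\n{b}" for i, b in enumerate(branches)) + \
               "\n\nPick the best final answer and explain why."
best = ollama.chat(model=llm, messages=[{'role':'user', 'content': judge_prompt}])
print(best['message']['content'])
```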
src=\"https:\/\/contributor.insightmediagroup.io\/wp-content\/uploads\/2025\/06\/image-155-1024x228.png\" alt=\"\" class=\"wp-image-606976\"\/><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">7) <strong><a href=\"https:\/\/arxiv.org\/abs\/2211.12588\">Program\u2011of\u2011Thoughts (PoT)<\/a><\/strong> that specializes in programming, where the reasoning happens via executable code snippets.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">import re\n\ndef extract_python_code(text):\n    match = re.search(r&quot;```python(.*?)```&quot;, text, re.DOTALL)\n    if match:\n        return match.group(1).strip()\n    return None\n\ndef sandbox_exec(code):\n    ## Create a minimal sandbox with safety limitation\n    allowed_builtins = {&#039;abs&#039;, &#039;min&#039;, &#039;max&#039;, &#039;pow&#039;, &#039;round&#039;}\n    safe_globals = {k: __builtins__.__dict__[k] for k in allowed_builtins if k in __builtins__.__dict__}\n    safe_locals = {}\n    exec(code, safe_globals, safe_locals)\n    return safe_locals.get(&#039;result&#039;, None)\n\nprompt = &#039;&#039;&#039;\nWrite a short Python program that calculates the answer and assigns it to a variable named &#039;result&#039;.  \nReturn only the code enclosed in triple backticks with &#039;python&#039; (```python ... ```).\n&#039;&#039;&#039;\n\nresponse = ollama.chat(model=llm, messages=[\n    {&#039;role&#039;:&#039;user&#039;, &#039;content&#039;: f&quot;Task: {q} \\n{prompt}&quot;}\n])\nprint(response[&#039;message&#039;][&#039;content&#039;])\nsandbox_exec(code=extract_python_code(text=response[&#039;message&#039;][&#039;content&#039;]))<\/code><\/pre>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/contributor.insightmediagroup.io\/wp-content\/uploads\/2025\/06\/image-154.png\" alt=\"\" class=\"wp-image-606975\" style=\"width:429px;height:auto\"\/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">This article has been a tutorial to <strong>recap all the major prompting techniques for AI Agents<\/strong>. There\u2019s no single \u201cbest\u201d prompting technique as it depends heavily on the task and the complexity of the reasoning needed.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">For example, simple tasks, like <strong>summarization and translation<\/strong>, are easiy performed with a Zero-Shot\/Regular prompting, while CoT works well for <strong>math and logic<\/strong> tasks. On the other hand, <strong>Agents with Tools<\/strong> are typically created with ReAct mode. Moreover, Reflexion is most appropriate when learning from mistakes or iterations improves results, like <strong>gaming<\/strong>.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">In terms of <strong>versatility <\/strong>for complex tasks,<strong> <\/strong>PoT is the real winner because it is solely based on code generation and execution. 
## Conclusion

This article has been a tutorial to **recap all the major prompting techniques for AI Agents**. There is no single "best" prompting technique, as the choice depends heavily on the task and on the complexity of the reasoning needed.

For example, simple tasks, like **summarization and translation**, are easily performed with Zero-Shot/Regular prompting, while CoT works well for **math and logic** tasks. On the other hand, **Agents with Tools** are typically created with ReAct mode. Moreover, Reflexion is most appropriate when learning from mistakes or iterations improves results, as in **gaming**.

In terms of **versatility** for complex tasks, PoT is the real winner because it is based entirely on code generation and execution. In fact, PoT Agents are getting closer to **replacing humans** in several office tasks.

I believe that, in the near future, prompting won't just be about "what you say to the model", but about orchestrating an interactive loop between human intent, machine reasoning, and external action.

Full code for this article: **[GitHub](https://github.com/mdipietro09/GenerativeAI/blob/main/Agents_ZeroToHero/notebook_IV_prompting.ipynb)**

I hope you enjoyed it! Feel free to contact me for questions and feedback, or just to share your interesting projects.

👉 [**Let's Connect**](https://maurodp.carrd.co/) 👈

![](https://contributor.insightmediagroup.io/wp-content/uploads/2025/06/1LgqxDMP5qD1HE_uM33zZrg.png)

*(All images are by the author unless otherwise noted)*