n8n + FastGPT RAG is a killer combo!! Use the most powerful AI knowledge base MCP Server to make up for n8n's shortcomings

In the field of AI applications, how to efficiently integrate knowledge bases to improve the quality of generated content has long been a focus for technology enthusiasts and practitioners. This article introduces a solution: by exposing FastGPT's knowledge base to n8n workflows through an MCP Server, you gain powerful knowledge base calling capabilities that address n8n's weakness in knowledge management.

Summer vacation is coming, and my education and training business is about to enter its peak season, so my Xiaohongshu and WeChat official accounts will start publishing summer-related content.

But content production has become a big problem.

How do you write articles that aren't hollow, that reflect real experience in your business vertical, and that attract target customers?

The answer is RAG: build a knowledge base from industry-experience articles, search it for relevant content before writing each article, and then produce the article based on that material. This goes a long way toward solving AI hallucination and hollow content.

As mentioned in yesterday's article, all my business workflows live in n8n, but n8n lacks good knowledge base capabilities.

As covered in previous articles, FastGPT is currently the most suitable knowledge base for individuals and small-to-medium teams, and its recently upgraded MCP Server can be called from external workflows.

So the solution is obvious: package the knowledge base on FastGPT as an MCP Server for n8n to call when producing content.

The end result looks like this: the AI Agent in n8n calls the knowledge base workflow in FastGPT through MCP.

1. FastGPT MCP Server

FastGPT can be installed via docker-compose. If it is already installed, it needs to be upgraded to the latest version, which is currently 4.9.11:

https://github.com/labring/FastGPT/pkgs/container/fastgpt

Upgrade FastGPT

Taking the Baota panel as an example, open Docker → Container Orchestration.


The location shown in the screenshot holds the docker-compose content; copy it and back it up.

Get the latest docker-compose file here:

https://github.com/labring/FastGPT/blob/main/deploy/docker/docker-compose-pgvector.yml

If you haven't installed FastGPT yet, you can use this file to install it directly.

Here we can use Cursor to compare the two files and make sure our own customizations are carried over into the new file.
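If you prefer a scripted comparison to an editor like Cursor, a minimal sketch using Python's standard `difflib` does the job (file contents below are illustrative placeholders, not the real compose files):

```python
import difflib

def compose_diff(old_text: str, new_text: str) -> list[str]:
    """Return a unified diff between old and new docker-compose contents."""
    return list(difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile="docker-compose.old.yml",
        tofile="docker-compose.new.yml",
    ))

if __name__ == "__main__":
    # Placeholder contents — in practice, read your backed-up and new yml files.
    old = "services:\n  fastgpt:\n    image: ghcr.io/labring/fastgpt:v4.9.0\n"
    new = "services:\n  fastgpt:\n    image: ghcr.io/labring/fastgpt:v4.9.11\n"
    print("".join(compose_diff(old, new)))
```

Any line starting with `-` in the output is something from your old file (such as a custom password) that you need to check is still present in the new one.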

For example, I set a custom default root password, which must also be set in the new yml file.
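As an illustration, such a customization sits in the `environment` section of the `fastgpt` service in the compose file. The variable name `DEFAULT_ROOT_PSW` matches FastGPT's docker-compose template, but verify it against your own file:

```yaml
services:
  fastgpt:
    environment:
      # Custom root login password — re-apply this after copying in the new yml.
      - DEFAULT_ROOT_PSW=your_password_here
```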

Then copy and paste the modified docker-compose file back into the position shown in the screenshot.

Click "Jump to Directory" again, and then set up config.json.

Download here:

https://raw.githubusercontent.com/labring/FastGPT/refs/heads/main/projects/app/data/config.json

Change mcpServerProxyEndpoint (shown in the screenshot) to your server's IP; this is the address the n8n workflow will call later.
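For illustration, the change is a single key in config.json. Locate `mcpServerProxyEndpoint` in your downloaded file and point it at your server; the IP and port below are placeholders:

```json
"mcpServerProxyEndpoint": "http://192.0.2.10:3005"
```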

Note that port 3005 is exposed here; you must open this port in the Baota panel's security settings so that it can be accessed from outside.

Then put the modified config.json in the same directory as the docker-compose file of FastGPT above.

This is the path that opens after the previous “Jump to Directory”.

Finally, go back to the Docker panel and click Stop → Update Image.
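If you manage the containers from the command line rather than the Baota panel, the equivalent of Stop → Update Image is roughly the following. This is a sketch only — run it in the directory containing the compose file, and note that older installations may use `docker-compose` instead of `docker compose`:

```shell
docker compose pull   # pull the updated images referenced in the compose file
docker compose up -d  # recreate the containers with the new images
```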

Open FastGPT now, and you can see it is running the latest version.

Build knowledge base workflows

Note that you cannot use the knowledge base directly here; otherwise you would only get AI-processed results.

What we want is the exact matching content from the original knowledge base, so we need a workflow that quotes and returns the raw knowledge base citations.

As shown in the figure:

After setting it up, be sure to click Save and Publish. If you don't publish, the test will not return any results.

Create an MCP service

Next, on the Workbench, click MCP Service → New Service.

A key point here: use an English name. n8n is an international platform, and AI models are not very reliable with Chinese, so a Chinese name can easily cause the tool call to fail.

After creating it, click "Start using" and switch to SSE.

The access address shown here is what we need.

2. n8n workflow call

Return to the n8n workflow at this point

This is my Xiaohongshu template workflow: first select topics in several directions, then generate the body from each title, then generate multiple images from the body, and finally save everything to a Feishu multi-dimensional table, which a matrix tool then reads for batch publishing.

Today we are going to fix the body-generation step by pulling in the knowledge base as a reference, so that the AI-generated content is more substantive.

In the AI Agent's Tool section, search for MCP and find the MCP Client Tool.

Also note that n8n must be version 1.94.1 or above to have MCP capabilities.

Next, enter the call URL prepared in FastGPT into the SSE Endpoint field.
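Before testing inside n8n, you can optionally sanity-check that the endpoint is reachable from the n8n host. The URL below is a placeholder — use the exact SSE address FastGPT gave you:

```shell
# Should hold the connection open and stream event lines if the MCP server is up
curl -N -H "Accept: text/event-stream" "http://192.0.2.10:3005/your-sse-path"
```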

Click Execute step to test it

Enter a content title

You can see that the content is returned correctly.

Every business has its own standard for whether the returned content is accurate, so I'll hide mine here.

If the returned results aren't what you want, you can go back to FastGPT and tune the workflow there.

Then return to the AI Agent node and test the large model's answers.

Note that there are two main points here:

1. The large model must support tool calling. Here I use an API from the ModelScope community; DeepSeek R1 0528 is strong but does not support tool calling, so unfortunately it can't be used. I use Qwen3-235B-A22B instead.

2. The prompt should explicitly instruct the AI to use the MCP tool to obtain reference content.

Unless the MCP service's name and description are well written and closely match the current content, most models will not call it on their own.
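For example, a line along these lines in the system prompt (illustrative wording; `kb_search` stands in for whatever name you gave your MCP service):

```text
Before writing, you MUST call the kb_search tool with the article title to
retrieve reference passages from the knowledge base, and base the article body
on those passages rather than on your own general knowledge.
```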

Finally, we can see that the large model calls MCP as expected.

So far, we have integrated FastGPT into the n8n workflow in the form of MCP Server.

n8n was already easy to use; with its knowledge base shortcoming filled in, it is now well-rounded.

That’s all for today’s sharing.
