Made some 30 of them, talking to the app server and all the containers inside Docker.
Now we can ask how they're all doing, posing application-level questions about records, levels, etc., as well as system-level questions like how much RAM the DB server is using. Makes for a fun demo. Not sure how useful it will be in production.
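Roughly, each agent gets tools along these lines. A minimal sketch (not the actual setup), assuming the official MCP Python SDK and a local Docker daemon; the server and tool names here are made up for illustration:

```python
# Sketch: an MCP tool that answers system-level questions by shelling
# out to `docker stats`. Assumes `pip install mcp` and a Docker daemon.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docker-stats")  # illustrative name

@mcp.tool()
def container_memory(name: str) -> str:
    """Current memory usage of a running container, looked up by name."""
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format",
         "{{.Name}}: {{.MemUsage}}", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```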
Can you explain more about your setup?
I’ve been playing with something similar where I built a shared file system and a messaging system for my agents, but how are you running your AIs?
I use the jan-beta GUI; you can run any model that supports tool calling locally, like qwen3-30B or jan-nano. You can download and install MCP servers (from, say, mcp.so) that serve different tools for the model to use, like web search, deep research, web scraping, downloading or summarizing videos, etc. There are hundreds of MCP servers for different use cases.
Never heard of this tool but I’ll check it out.
Mostly I've just been making my own Dockerfiles and spinning up my own MCP instances.
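For reference, here's the shape of those Dockerfiles; a sketch, with server.py standing in for whatever MCP server script you're packaging:

```dockerfile
# Sketch of a sandboxed MCP server image; server.py is a hypothetical
# stand-in for your actual server script.
FROM python:3.12-slim
WORKDIR /app
RUN pip install --no-cache-dir mcp
COPY server.py .
# stdio transport: the MCP client talks to the container over stdin/stdout
ENTRYPOINT ["python", "server.py"]
```

Since it speaks stdio, the client config typically launches it with `docker run -i --rm <image>` so stdin stays open.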
They're actually not hard to build, so I'm trying to build my own for small utilities; that way I don't get caught up in an `is_even`-style dependency web.
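To give a sense of scale, a complete server for one of those small utilities is only a few lines. A sketch with the official MCP Python SDK (the server name is mine, made up for the example):

```python
# A whole MCP server for a tiny utility really is about this long.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("small-utils")  # illustrative name

@mcp.tool()
def is_even(n: int) -> bool:
    """True if n is even."""
    return n % 2 == 0

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio
```

A calculator-style server would be the same shape: one decorated function per operation.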
https://context7.com/ is a free API-based MCP server that provides up-to-date code docs. Incredibly useful.
What does an MCP server do?
Basically it’s a layer to let your LLMs plug into tools.
They generally run on your machine (I use Docker to sandbox them) and may or may not call out to useful APIs.
One example: I just wrote one that connects to my national weather service's RSS feed, so my LLMs can fetch and summarize the weather for me in the morning.
Works well with Gemma 3n
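Stripped down, it looks something like this; a sketch, with the feed URL as a placeholder since yours will depend on your weather service, and parsing that assumes a standard RSS 2.0 layout:

```python
# Sketch of the weather tool; FEED_URL is a placeholder, and the parsing
# assumes a standard RSS 2.0 <rss><channel><item> structure.
import urllib.request
import xml.etree.ElementTree as ET

from mcp.server.fastmcp import FastMCP

FEED_URL = "https://example.org/weather/rss"  # replace with your service's feed

mcp = FastMCP("weather")

@mcp.tool()
def weather_headlines(limit: int = 5) -> str:
    """Fetch the latest items from the weather feed for the model to summarize."""
    with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    items = root.findall("./channel/item")[:limit]
    return "\n".join(
        f"{item.findtext('title', default='')}: "
        f"{item.findtext('description', default='')}"
        for item in items
    )

if __name__ == "__main__":
    mcp.run()
```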
An MCP server can also just be an interface to something useful but simple, like a calculator.