transport, MCP, servers, programming
I would like to highlight one more benefit of the Model Context Protocol (MCP): the ability to easily change the transport protocol. Three different transport protocols are available at the moment, and each has its own benefits and drawbacks.
However, if an MCP server is implemented properly with a good SDK, then switching to another transport protocol is straightforward.
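As a rough illustration of what that switch can look like, here is a minimal Go sketch. It assumes the mark3labs/mcp-go SDK; the helper names used here (`NewMCPServer`, `AddTool`, `ServeStdio`, `NewSSEServer`) follow that SDK's examples at the time of writing and may differ in other SDKs or versions.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

func main() {
	// The server itself is transport-agnostic: tools are registered once.
	s := server.NewMCPServer("demo-server", "1.0.0")

	tool := mcp.NewTool("ping", mcp.WithDescription("Replies with pong"))
	s.AddTool(tool, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		return mcp.NewToolResultText("pong"), nil
	})

	// Only the last step decides which transport is used.
	if os.Getenv("MCP_TRANSPORT") == "sse" {
		// Expose the same server over HTTP with Server-Sent Events.
		sseServer := server.NewSSEServer(s)
		if err := sseServer.Start(":8080"); err != nil {
			log.Fatal(err)
		}
		return
	}

	// Default: communicate over stdin/stdout (STDIO transport).
	if err := server.ServeStdio(s); err != nil {
		log.Fatal(err)
	}
}
```

Everything above the transport choice stays identical, which is exactly why switching is cheap.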
## Quick Recap: What is MCP?
* **Model Context Protocol (MCP)** is a new standard for integrating external tools with AI chat applications. For example, you can add Google Search as an MCP server to Claude Desktop, allowing the LLM to perform live searches to improve its responses. In this case, Claude Desktop is the *MCP Host*.
Continue Reading ...
In recent months, the Model Context Protocol (MCP) has gained a lot of traction as a powerful foundation for building AI assistants. While many developers are familiar with its core request-response flow, there's one feature that I believe remains underappreciated: the ability of MCP servers to send **notifications to clients**.
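On the wire, a notification is simply a JSON-RPC message without an `id` field, so the client is not expected to reply. The tiny sketch below (plain Go, standard library only) prints the `notifications/tools/list_changed` message defined by the MCP spec; the struct is my own simplified shape for illustration, not something taken from an SDK.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// notification mirrors the JSON-RPC 2.0 notification shape:
// it carries a method but deliberately no "id", so no response is expected.
type notification struct {
	JSONRPC string         `json:"jsonrpc"`
	Method  string         `json:"method"`
	Params  map[string]any `json:"params,omitempty"`
}

func main() {
	// Sent by a server when its tool list changes; the client may then
	// re-request the list of available tools.
	n := notification{
		JSONRPC: "2.0",
		Method:  "notifications/tools/list_changed",
	}

	out, _ := json.MarshalIndent(n, "", "  ")
	fmt.Println(string(out))
	// Output:
	// {
	//   "jsonrpc": "2.0",
	//   "method": "notifications/tools/list_changed"
	// }
}
```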
Let’s quickly recap the typical flow used by most MCP-based assistants:
* A user sends a prompt to the assistant.
* The assistant attaches a list of available tools and forwards the prompt to the LLM.
* The LLM generates a response, possibly requesting the execution of one or more of the available tools.
Continue Reading ...
Recently, I introduced the idea of using MCP (Model Context Protocol) to implement memory for AI chats and assistants. The core concept is to separate the assistant's memory from its core logic, turning it into a dedicated MCP server.
If you're unfamiliar with this approach, I suggest reading my earlier article: [Benefits of Using MCP to Implement AI Chat Memory](/blog/post/benefits-of-using-mcp-to-implement-ai-chat-memory/).
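To make the idea more concrete, here is a minimal sketch of what such a memory server's tool surface could look like in Go, again assuming the mark3labs/mcp-go SDK. The tool names (`remember`, `recall`) and the in-memory store are purely illustrative, and the argument access follows older mcp-go examples (`Params.Arguments` as a map), so it may need adjusting for newer SDK versions; the real memory server described in the earlier article may expose a different interface.

```go
package main

import (
	"context"
	"log"
	"strings"

	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

func main() {
	s := server.NewMCPServer("memory-server", "0.1.0")

	// Naive in-memory store; a real server would use a database or a vector index.
	var notes []string

	remember := mcp.NewTool("remember",
		mcp.WithDescription("Store a fact about the user or the conversation"),
		mcp.WithString("fact", mcp.Required()),
	)
	s.AddTool(remember, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		fact, _ := req.Params.Arguments["fact"].(string)
		notes = append(notes, fact)
		return mcp.NewToolResultText("stored"), nil
	})

	recall := mcp.NewTool("recall",
		mcp.WithDescription("Return everything remembered so far"),
	)
	s.AddTool(recall, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		return mcp.NewToolResultText(strings.Join(notes, "\n")), nil
	})

	// The assistant's core logic lives elsewhere; it only calls these tools.
	if err := server.ServeStdio(s); err != nil {
		log.Fatal(err)
	}
}
```

The point of the separation is visible here: the assistant never touches the storage directly, so the storage can evolve independently.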
## What Do I Mean by “AI Chat”?
In this context, an "AI Chat" refers to an AI assistant that uses a chat interface, with an LLM (Large Language Model) as its core.
Continue Reading ...
I'm excited to introduce a new package for Go developers: [**CleverChatty**](https://github.com/Gelembjuk/cleverchatty).
**CleverChatty** implements the core functionality of an AI chat system. It encapsulates the essential business logic required for building AI-powered assistants or chatbots — all while remaining independent of any specific user interface (UI).
In short, **CleverChatty** is a fully working AI chat backend — just without a graphical UI. It supports many popular LLM providers, including OpenAI, Claude, Ollama, and others. It also integrates with external tools using the Model Context Protocol (MCP).
Continue Reading ...
memory, AI, conversational agents
Implementing memory for AI assistants or conversational AI tools remains a complex engineering challenge. Large Language Models (LLMs) like ChatGPT are stateless by design—they only retain knowledge up to their training cutoff and do not inherently remember past interactions. However, for a seamless and context-aware user experience, it’s crucial for AI chat tools to recall previous conversations, preferences, and relevant history.
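To see that statelessness concretely, the sketch below uses plain Go with made-up types rather than any real provider API: every request has to carry the whole conversation, and whatever the application does not resend, the model simply does not know.

```go
package main

import "fmt"

// message is a simplified stand-in for the chat messages that real LLM APIs accept.
type message struct {
	Role    string // "system", "user" or "assistant"
	Content string
}

// callLLM is a placeholder: a real call would POST these messages to a provider.
// The important part is the signature: the only context the model sees is
// whatever the caller passes in right now.
func callLLM(history []message) message {
	return message{Role: "assistant", Content: fmt.Sprintf("(reply based on %d messages)", len(history))}
}

func main() {
	history := []message{
		{Role: "user", Content: "My name is Oksana."},
	}
	history = append(history, callLLM(history))

	// Second turn: the first exchange must be resent, otherwise the model
	// has no idea the user's name was ever mentioned.
	history = append(history, message{Role: "user", Content: "What is my name?"})
	reply := callLLM(history)
	fmt.Println(reply.Content)
}
```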
To address this gap, different vendors have developed their own proprietary solutions for integrating memory. For example, OpenAI’s ChatGPT has a built-in memory feature that lets it recall details across conversations.
Continue Reading ...
In this post, I’d like to share some thoughts on the **Model Context Protocol (MCP)** and compare two of the server integration methods it supports, **STDIO** and **SSE**, with a particular focus on security.
## Quick Recap: What is MCP?
- **Model Context Protocol (MCP)** is a new standard for integrating external tools with AI chat applications. For example, you can add Google Search as an MCP server to Claude Desktop, allowing the LLM to perform live searches to improve its responses. In this case, Claude Desktop is the *MCP Host*.
There are two common types of MCP servers: STDIO-based and SSE-based.
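As a preview of the security angle, here is a rough host-side sketch using only the Go standard library: a STDIO server is a local child process, so its exposure is limited to the machine and user it runs under, while an SSE server is a network-reachable HTTP endpoint where transport security and authentication have to be handled explicitly. The binary path and URL below are placeholders.

```go
package main

import (
	"log"
	"net/http"
	"os/exec"
)

func main() {
	// STDIO: the host starts the server as a local subprocess and talks to it
	// over stdin/stdout. No network port is opened; the trust boundary is the
	// local machine and the privileges of this process.
	cmd := exec.Command("./my-mcp-server") // placeholder binary path
	stdin, _ := cmd.StdinPipe()
	stdout, _ := cmd.StdoutPipe()
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	_ = stdin  // JSON-RPC requests would be written here
	_ = stdout // responses and notifications would be read here

	// SSE: the host connects to a long-lived HTTP endpoint. Anyone who can reach
	// this address can try to talk to the server, so TLS, authentication, and
	// access control become the operator's responsibility.
	req, _ := http.NewRequest(http.MethodGet, "http://localhost:8080/sse", nil) // placeholder URL
	req.Header.Set("Accept", "text/event-stream")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	// The event stream carrying server messages would be read from resp.Body.
}
```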
Continue Reading ...