Easily Switch Transport Protocols in MCP Servers

I would like to highlight one more benefit of the Model Context Protocol (MCP): the ability to switch the transport protocol easily. There are three transport protocols available at the moment, and each has its own benefits and drawbacks.

However, if an MCP server is implemented properly using a good SDK, then switching to another transport protocol is easy.

Quick Recap: What is MCP?

  • Model Context Protocol (MCP) is a new standard for integrating external tools with AI chat applications. For example, you can add Google Search as an MCP server to Claude Desktop, allowing the LLM to perform live searches to improve its responses. In this case, Claude Desktop is the MCP Host.

There are three common types of MCP server transports:

  • STDIO Transport: The MCP server runs locally on the same machine as the MCP Host. Users download a small application (the MCP server), install it, and configure the MCP Host to communicate with it via standard input/output.

  • SSE Transport: The MCP server runs as a network service, typically on a remote server (but it can also be on localhost). It's essentially a web service that the MCP Host connects to via Server-Sent Events (SSE).

  • Streamable HTTP Transport: The newer HTTP-based transport that supersedes SSE in recent revisions of the MCP specification. The server exposes a single HTTP endpoint that handles both regular requests and streamed responses.
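If the server's tool logic is kept independent of the transport layer, switching transports becomes a small change at startup. Here is a minimal, hypothetical Go sketch (a toy server of my own invention, not a real MCP SDK) illustrating that separation:

```go
package main

// Toy sketch (not a real MCP SDK): tool logic is registered once;
// each transport is just a thin adapter over Handle.

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

type Tool func(args map[string]any) any

type ToyServer struct {
	tools map[string]Tool
}

func NewToyServer() *ToyServer {
	return &ToyServer{tools: map[string]Tool{}}
}

func (s *ToyServer) Register(name string, t Tool) {
	s.tools[name] = t
}

// Handle processes one JSON request, independent of how it arrived.
func (s *ToyServer) Handle(req string) string {
	var msg struct {
		Tool string         `json:"tool"`
		Args map[string]any `json:"args"`
	}
	if err := json.Unmarshal([]byte(req), &msg); err != nil {
		return `{"error":"bad request"}`
	}
	out, _ := json.Marshal(map[string]any{"result": s.tools[msg.Tool](msg.Args)})
	return string(out)
}

// ServeStdio is the STDIO transport adapter; an SSE or Streamable HTTP
// adapter would call the same Handle method from an HTTP handler instead.
func (s *ToyServer) ServeStdio(in io.Reader, out io.Writer) {
	sc := bufio.NewScanner(in)
	for sc.Scan() {
		fmt.Fprintln(out, s.Handle(sc.Text()))
	}
}

func main() {
	s := NewToyServer()
	// JSON numbers decode to float64 in a map[string]any.
	s.Register("add", func(a map[string]any) any {
		return a["a"].(float64) + a["b"].(float64)
	})

	// Simulate the STDIO transport with in-memory streams.
	var out strings.Builder
	s.ServeStdio(strings.NewReader(`{"tool":"add","args":{"a":2,"b":3}}`), &out)
	fmt.Print(out.String()) // prints {"result":5}
}
```

Real MCP SDKs follow essentially the same pattern: you register tools once and choose the transport when serving, which is why switching is cheap.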

Continue Reading ...

An Underrated Feature of MCP Servers: Client Notifications

In recent months, the Model Context Protocol (MCP) has gained a lot of traction as a powerful foundation for building AI assistants. While many developers are familiar with its core request-response flow, there's one feature that I believe remains underappreciated: the ability of MCP servers to send notifications to clients.

Let’s quickly recap the typical flow used by most MCP-based assistants:

  • A user sends a prompt to the assistant.
  • The assistant attaches a list of available tools and forwards the prompt to the LLM.
  • The LLM generates a response, possibly requesting the use of certain tools for additional context.
  • The assistant invokes those tools and gathers their responses.
  • These tool responses are sent back to the LLM.
  • The LLM returns a final answer, which the assistant presents to the user.

This user-initiated flow is incredibly effective—and it’s what powers many AI assistants today.

However, MCP also supports a less obvious but equally powerful capability: tool-initiated communication. That is, tools can trigger actions that cause the MCP server to send real-time notifications to the client, even when the user hasn’t sent a new prompt.

Continue Reading ...

Implementing AI Chat Memory with MCP

Recently, I introduced the idea of using MCP (Model Context Protocol) to implement memory for AI chats and assistants. The core concept is to separate the assistant's memory from its core logic, turning it into a dedicated MCP server.

If you're unfamiliar with this approach, I suggest reading my earlier article: Benefits of Using MCP to Implement AI Chat Memory.

What Do I Mean by “AI Chat”?

In this context, an "AI Chat" refers to an AI assistant that uses a chat interface, with an LLM (Large Language Model) as its core, and supports calling external tools via MCP. ChatGPT is a good example.

Throughout this article, I’ll use the terms AI Chat and AI Assistant interchangeably.
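As a rough illustration of the memory-as-an-MCP-server idea, here is a hypothetical Go sketch of the storage that could sit behind two tools an assistant might call, say "remember" and "recall". The tool names and data shape are my assumption, not part of any spec:

```go
package main

import "fmt"

// Hypothetical storage behind a memory MCP server. In a real server,
// Remember and Recall would be exposed as MCP tools; the names and
// structure here are illustrative assumptions only.

type MemoryStore struct {
	entries map[string][]string // facts keyed by user or session ID
}

func NewMemoryStore() *MemoryStore {
	return &MemoryStore{entries: map[string][]string{}}
}

// Remember appends a fact about a user; would back a "remember" tool.
func (m *MemoryStore) Remember(user, fact string) {
	m.entries[user] = append(m.entries[user], fact)
}

// Recall returns everything stored for a user; would back a "recall" tool.
func (m *MemoryStore) Recall(user string) []string {
	return m.entries[user]
}

func main() {
	store := NewMemoryStore()
	store.Remember("alice", "prefers metric units")
	store.Remember("alice", "lives in Berlin")
	fmt.Println(store.Recall("alice")) // prints [prefers metric units lives in Berlin]
}
```

The point of the separation is that this store lives in its own process: any MCP-capable assistant could use it, and the assistant's core logic never touches the storage details.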

Continue Reading ...

Introducing CleverChatty – An AI Assistant Package for Go 🤖🐹

I'm excited to introduce a new package for Go developers: CleverChatty.
CleverChatty implements the core functionality of an AI chat system. It encapsulates the essential business logic required for building AI-powered assistants or chatbots — all while remaining independent of any specific user interface (UI).

In short, CleverChatty is a fully working AI chat backend — just without a graphical UI. It supports many popular LLM providers, including OpenAI, Claude, Ollama, and others. It also integrates with external tools using the Model Context Protocol (MCP).


Continue Reading ...

Benefits of Using MCP to Implement AI Chat Memory

Implementing memory for AI assistants or conversational AI tools remains a complex engineering challenge. Large Language Models (LLMs), such as the models behind ChatGPT, are stateless by design: they only retain knowledge up to their training cutoff and do not inherently remember past interactions. However, for a seamless and context-aware user experience, it's crucial for AI chat tools to recall previous conversations, preferences, and relevant history.

To address this gap, different vendors have developed their own proprietary solutions for integrating memory. For example, OpenAI’s ChatGPT has built-in memory capabilities, and other platforms like Anthropic’s Claude (including the Claude Desktop application) offer similar features. Each of these implementations is unique, often tied closely to the platform’s internal architecture and APIs.

This fragmented landscape raises an important question: what if we had a standardized way to implement memory for AI assistants?

Model Context Protocol (MCP) was originally designed to provide a standard way to integrate external tools with large language models (LLMs). But this same concept could inspire a standardized approach to implementing memory in AI chat systems. Instead of inventing something entirely new, perhaps we can extend or repurpose MCP to serve this function as well.

Continue Reading ...

Which MCP Server Transport is Better? Comparing STDIO and SSE

In this post, I'd like to share some thoughts on the Model Context Protocol (MCP) and compare two of the server transports it supports, STDIO and SSE, with a particular focus on security.

Quick Recap: What is MCP?

  • Model Context Protocol (MCP) is a new standard for integrating external tools with AI chat applications. For example, you can add Google Search as an MCP server to Claude Desktop, allowing the LLM to perform live searches to improve its responses. In this case, Claude Desktop is the MCP Host.

There are two common types of MCP server transports:

  • STDIO Transport: The MCP server runs locally on the same machine as the MCP Host. Users download a small application (the MCP server), install it, and configure the MCP Host to communicate with it via standard input/output.

  • SSE Transport: The MCP server runs as a network service, typically on a remote server (but it can also be on localhost). It's essentially a web service that the MCP Host connects to via Server-Sent Events (SSE).
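For reference, this is what a STDIO server entry typically looks like in Claude Desktop's claude_desktop_config.json (the server name, path, and flag below are placeholders). An SSE server is instead referenced by its URL, with the exact configuration key depending on the host:

```json
{
  "mcpServers": {
    "my-local-tool": {
      "command": "/path/to/my-mcp-server",
      "args": ["--verbose"]
    }
  }
}
```

The security difference follows directly from this shape: the STDIO entry launches an arbitrary local executable, while an SSE entry only opens a network connection.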

Continue Reading ...