At larger enterprises, multiple models and agentic workflows almost always run concurrently, handing data back and forth between systems as user needs dictate. When LLMs first took off, there was no unified protocol for these operations; developers had to hand-build custom integrations between every pair of components.
Anthropic addressed this problem in November 2024 by releasing an open standard for inter-AI communication: the Model Context Protocol (MCP).
Within a year it had reached significant adoption, including by OpenAI, Google, Microsoft, and GitHub, making it one of the defining AI stories of 2025.
MCP aims to be the “USB-C” of AI: a standard connector between clients, servers, and external resources. Instead of asking an AI to write you a database query, an AI agent can now query the database directly, with MCP acting as the middleman. Suddenly, AI agents become genuinely agentic.
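Under the hood, MCP messages follow the JSON-RPC 2.0 format. As a rough sketch (the tool name and arguments here are hypothetical, not from any real MCP server), a client asking a server to run a database query might send something like:

```python
import json

# Hypothetical MCP "tools/call" request: the client asks the server to
# invoke a tool named "query_database". The envelope is JSON-RPC 2.0;
# the tool name and arguments are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# Serialized, this is roughly what travels over the wire between
# the AI client and the MCP server.
wire_message = json.dumps(request)
print(wire_message)
```

Because every tool, whether it wraps a database, a file store, or a ticketing system, speaks this same envelope, a client integrates once with the protocol rather than once per service.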
MCP’s use cases are limited only by the imagination, but who’s finding practical uses for it right now?
- Software developers are using MCP-connected tools for debugging, testing, and writing documentation.
- Business stakeholders are querying internal data systems they previously couldn't access directly.
- Customer service teams are automating support: routing users, creating tickets, and gathering context.
MCP targets ambitious workflows, but smaller businesses can find value in an MCP installation, too. MCP servers already exist for many common data sources like Google Drive, Slack, and Zendesk, and the protocol scales easily, so a small initial investment can grow along with the business.
If you think your business could use technology like MCP, reach out to us and schedule a call.
