
Custom MCP Catalogs and Profiles: Advancing Enterprise MCP Adoption

The Model Context Protocol (MCP) has quietly become essential infrastructure for connecting AI applications to custom tools and data sources. Docker’s announcement of Custom Catalogs and Profiles moving to general availability addresses a real pain point: how do enterprises standardize, distribute, and manage MCP servers at scale? If you’ve been experimenting with MCP servers locally, you’ve probably packaged them ad-hoc—copying configurations, managing dependencies, and hoping everything works across different environments. Custom Catalogs and Profiles solve this by providing a structured way to package and distribute MCP tooling across your organization, similar to how you might manage container registries or package repositories.

Here’s how this works technically. Custom MCP Catalogs function as curated collections of MCP server configurations that your organization can create, version, and distribute. Rather than each developer manually configuring MCP servers, a catalog lets your platform team define approved servers with standardized configurations, environment variables, and resource constraints—then publish that catalog to your team. MCP Profiles let individual developers select which servers from your catalog they need and customize them for their specific use case. Think of it like this: a catalog is your organization’s approved list of database connectors, APIs, and custom tools, while a profile is what an individual developer activates for their Claude session or AI application. The configuration typically lives in a manifest file that can be versioned in Git, making it reproducible and auditable.
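To make the catalog/profile relationship concrete, here is a minimal sketch in Python. Everything in it is illustrative: the server names, the config fields, and the `resolve_profile` helper are hypothetical and do not reflect Docker's actual manifest schema or tooling. The point is the pattern: a platform team defines the approved set, and a developer's profile activates a subset with local overrides.

```python
# Illustrative model of a catalog/profile relationship.
# Names and fields are hypothetical, not Docker's actual schema.

CATALOG = {
    "postgres-connector": {
        "image": "example/mcp-postgres:1.2",
        "env": {"PG_HOST": "db.internal"},
    },
    "docs-search": {
        "image": "example/mcp-docs:0.9",
        "env": {},
    },
}

def resolve_profile(catalog, selected, overrides=None):
    """Activate a subset of catalog servers, applying per-developer overrides.

    Raises ValueError if the profile requests a server the catalog has not
    approved -- the governance property catalogs are meant to enforce.
    """
    overrides = overrides or {}
    profile = {}
    for name in selected:
        if name not in catalog:
            raise ValueError(f"{name!r} is not in the approved catalog")
        entry = catalog[name]
        # Merge without mutating the shared catalog definition.
        profile[name] = {
            **entry,
            "env": {**entry["env"], **overrides.get(name, {})},
        }
    return profile

profile = resolve_profile(
    CATALOG,
    ["postgres-connector"],
    {"postgres-connector": {"PG_HOST": "staging-db.internal"}},
)
print(profile["postgres-connector"]["env"]["PG_HOST"])  # staging-db.internal
```

Because the resolved profile is plain data derived from a versioned catalog, the same inputs always produce the same configuration, which is what makes the setup reproducible and auditable.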

The practical benefits become clear in enterprise scenarios. A financial services team might create a catalog with approved connectors to internal compliance databases, market data APIs, and calculation tools—then enforce that all AI assistants use only these pre-vetted integrations. A software development team could catalog their internal documentation servers, git repositories, and deployment APIs, letting engineers quickly spin up an AI assistant with the right context without manual setup. From a governance perspective, teams get visibility into which tools AI applications can access, which is increasingly important for compliance and security reviews. DevOps teams benefit too: instead of troubleshooting individual MCP configurations across dozens of developers, they manage one catalog and ensure consistency.
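The governance angle above can be automated. The sketch below shows a hypothetical CI-style audit that flags any developer profile activating a server outside the approved catalog; the file layout, names, and `audit_profiles` function are assumptions for illustration, not part of Docker's product.

```python
# Hypothetical compliance check: report any profile entry that activates
# a server the organization's catalog has not approved.

def audit_profiles(catalog_servers, profiles):
    """Return {developer: [unapproved servers]} for every violation found."""
    approved = set(catalog_servers)
    violations = {}
    for developer, servers in profiles.items():
        extra = sorted(set(servers) - approved)
        if extra:
            violations[developer] = extra
    return violations

catalog = ["compliance-db", "market-data", "risk-calc"]
profiles = {
    "alice": ["compliance-db", "market-data"],
    "bob": ["market-data", "unvetted-scraper"],  # not in the catalog
}
print(audit_profiles(catalog, profiles))  # {'bob': ['unvetted-scraper']}
```

A check like this, run against profile manifests stored in Git, gives security reviewers the visibility the article describes without inspecting each developer's machine.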

If you’re building AI applications in your organization, this pattern is worth understanding. You’re likely already thinking about standardization—whether that’s approved libraries, shared APIs, or container images. MCP Catalogs and Profiles extend that discipline to AI tooling. The immediate practical step is evaluating whether your current MCP experiments could benefit from centralization. Are you running the same MCP servers across multiple projects? Do you want to restrict which external tools your AI applications can access? Do you need reproducible configurations for different environments? If you answered yes to any of these, Custom Catalogs and Profiles will simplify your setup significantly. Start small: catalog your most commonly used MCP servers and create a profile for your team’s standard development environment, then expand from there.

Source: Docker