Introduction to the Model Context Protocol (MCP)
Author: Igor Drobiazko
In the rapidly evolving landscape of artificial intelligence, one of the most significant challenges has been enabling AI models to interact seamlessly with external tools, data sources, and services. Enter the Model Context Protocol (MCP) – a groundbreaking standard that's reshaping how AI systems connect with the world around them.
What is Model Context Protocol?
Model Context Protocol (MCP) is an open standard created by Anthropic to enable secure, standardized connections between AI models and external resources. Anthropic developed MCP to solve a critical challenge in the AI ecosystem: the lack of a unified way for AI assistants to interact with external tools and data sources.
Before MCP, the landscape was fragmented and inefficient. Each AI application developer had to build custom, one-off integrations for every external service they wanted their AI to use. If you wanted your AI assistant to access a database, you'd write custom database connection code. If you needed file system access, that required another custom integration. Want to connect to a specific API? Yet another bespoke implementation. This approach created several problems:
- Massive duplication of effort: Every AI application was essentially solving the same integration challenges from scratch
- Inconsistent user experiences: Different AI tools handled similar tasks in completely different ways
- High maintenance burden: When external services updated their APIs, every custom integration had to be individually updated
- Limited tool ecosystems: The high cost of building integrations meant most AI applications could only connect to a handful of external services
MCP changes this paradigm entirely. Think of it as a universal translator that allows AI assistants to communicate with databases, APIs, file systems, and other tools through a single, consistent interface. Instead of building dozens of custom integrations, developers can implement MCP once and gain access to any MCP-compatible tool or service.
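To make this concrete: from the client's point of view, adding a new capability usually means nothing more than registering another MCP server. The snippet below is an illustrative sketch of a host configuration (in the style Claude Desktop uses, shown later in this article) that wires up a database server and a filesystem server side by side. The connection string and directory path are placeholders; both packages are reference servers published by the MCP project.

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/employees"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}

The client code does not change between the two servers: it discovers each one's tools and calls them through the same protocol.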
Next let's explore the MCP architecture.
The Architecture Behind MCP
MCP operates on a client-server architecture that prioritizes both flexibility and security:
Servers
MCP servers act as bridges between AI models and external resources. Each server can expose multiple tools, data sources, or services through a standardized interface. For example, a database server might provide read and write capabilities for specific tables, while a file system server could offer document retrieval and manipulation functions.
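For example, when a client asks a server what it offers, the server replies with a machine-readable catalog of tools. Below is a simplified sketch of a tools/list result; the read_table tool and its input schema are hypothetical and only illustrate the shape of such a listing.

{
  "tools": [
    {
      "name": "read_table",
      "description": "Read rows from an allowed table",
      "inputSchema": {
        "type": "object",
        "properties": {
          "table": { "type": "string" },
          "limit": { "type": "integer" }
        },
        "required": ["table"]
      }
    }
  ]
}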
Clients
On the other side, MCP clients are typically AI applications or model providers that need to access external resources. These clients can discover available servers, understand their capabilities, and make requests through the standardized protocol.
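In practice, each request is a JSON-RPC 2.0 message with a standard method name, so the client needs no server-specific code. Here is a hedged sketch of a client invoking the hypothetical read_table tool via tools/call:

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "read_table",
    "arguments": { "table": "employee", "limit": 10 }
  }
}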
Transport Layer
MCP supports multiple transport mechanisms, including local stdio connections for desktop applications and HTTP-based connections for web-based or remote integrations. This flexibility ensures that MCP can work across different deployment scenarios.
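Whichever transport is used, the messages themselves stay the same: over stdio the client launches the server as a subprocess and exchanges JSON-RPC messages on stdin/stdout, while remote setups carry the same messages over HTTP. As a rough illustration, every session starts with an initialize handshake along these lines (the field values here are examples, not requirements):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "clientInfo": { "name": "example-client", "version": "0.1.0" },
    "capabilities": {}
  }
}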
To better understand how MCP works in practice, let's walk through a concrete example.
MCP in Action: A Practical Example
Suppose you want to provide your LLM with access to your company's employee database to enable data analysis about employees - perhaps to generate reports, answer HR questions, or identify workforce trends. We'll use Claude Desktop as our LLM client and connect it to the employee database through a PostgreSQL MCP server.
The Setup
In this scenario, you have:
- Claude Desktop: Acting as the MCP client that needs database access
- PostgreSQL MCP Server: A specialized server that provides secure, standardized access to your employee database
- Employee Database: A PostgreSQL database containing the employees data
Database structure
We are going to create our employees database using the data from neondatabase-labs/postgres-sample-dbs. You can use your own database with your own data, or import the employees data for the sake of this experiment. Let's take a brief look at the table structure in the employees database.
department          : id (PK), dept_name (UQ)
employee            : id (PK), birth_date, first_name, last_name, gender, hire_date
department_employee : employee_id (PK, FK → employee.id), department_id (PK, FK → department.id), from_date, to_date
department_manager  : employee_id (PK, FK → employee.id), department_id (PK, FK → department.id), from_date, to_date
salary              : employee_id (FK → employee.id), amount, from_date (PK), to_date
title               : employee_id (FK → employee.id), title, from_date (PK), to_date
As you can see in the structure above, we have employees, each working in one or more departments. Some employees also manage departments. Separately, the salary table keeps track of how much each employee earned over time, while the title table records their job titles through different periods. Departments themselves are independent entries, but they connect to employees through both membership and management.
Now that we are familiar with the database structure, let's configure our Claude Desktop with a PostgreSQL MCP server.
Configuring Claude Desktop with MCP server
To configure your Claude Desktop with an MCP server, perform the following steps.
- Launch your Claude Desktop
- Go to Settings and click on the Developer tab
- Click on Edit Config, which will create a file called claude_desktop_config.json at the following location:
  - On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  - On Windows: %APPDATA%/Claude/claude_desktop_config.json
- Open this file and paste the following configuration into it.
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/employees"
      ]
    }
  }
}
Please note that we are using npx as the command. npx is a command-line tool that allows you to execute Node.js packages without installing them globally, so you need npm installed first. In most cases you don't install npx separately; it comes bundled with npm version 5.2.0 and later.
Alternatively, you can use Docker to run the MCP server. If you prefer Docker, please use this configuration.
{
  "mcpServers": {
    "postgres": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp/postgres",
        "postgresql://host.docker.internal:5432/employees"
      ]
    }
  }
}
Please note that when running Docker on macOS, use host.docker.internal if the PostgreSQL server is running on the host network (e.g., localhost).
Now restart your Claude Desktop and go to the Settings again. If everything went okay, you should see your Postgres MCP server running, as shown in the following screenshot.

Start chatting with your database
Now comes the fun part. Let's type the following question into the Claude Desktop prompt field:
Who are the top 3 highest-paid employees in my PostgreSQL database?
Allowing tools from MCP servers
Before accessing your database for the first time, Claude Desktop will ask for permission to use tools from your Postgres MCP server. You can grant permission either once or permanently.
Once you’ve granted Claude Desktop permission to access these tools, the model (Claude Sonnet 4 in this example) will immediately begin reasoning, as shown in the following screenshot.

Let’s walk through the model’s reasoning process. First, it checks which schemas are available in the database by asking the MCP server to execute this query:
SELECT
  schema_name
FROM
  information_schema.schemata
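Under the hood, Claude Desktop does not talk SQL to the database directly; it wraps each statement in an MCP tools/call request to the Postgres server. Here is a rough sketch of that request, assuming the server exposes a query tool that accepts an sql argument (as the reference @modelcontextprotocol/server-postgres implementation does):

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": {
      "sql": "SELECT schema_name FROM information_schema.schemata"
    }
  }
}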
The MCP server responds with a JSON array containing the available schema names:
[
  { "schema_name": "public" },
  { "schema_name": "employees" },
  { "schema_name": "information_schema" },
  { "schema_name": "pg_catalog" },
  { "schema_name": "pg_toast" }
]
Seeing that the employees schema exists, the model next determines what tables are present in it by executing:
SELECT
  table_name
FROM
  information_schema.tables
WHERE
  table_schema = 'employees'
ORDER BY
  table_name;
The MCP server replies with a JSON array of table names:
[
  { "table_name": "department" },
  { "table_name": "department_employee" },
  { "table_name": "department_manager" },
  { "table_name": "employee" },
  { "table_name": "salary" },
  { "table_name": "title" }
]
The model then examines the structure of the employee and salary tables to understand their relationship. First, it retrieves the columns in the employee table:
SELECT
  column_name, data_type
FROM
  information_schema.columns
WHERE
  table_schema = 'employees' AND table_name = 'employee'
ORDER BY
  ordinal_position
The response provides the columns of the employee table:
[
  { "column_name": "id", "data_type": "bigint" },
  { "column_name": "birth_date", "data_type": "date" },
  { "column_name": "first_name", "data_type": "character varying" },
  { "column_name": "last_name", "data_type": "character varying" },
  { "column_name": "gender", "data_type": "USER-DEFINED" },
  { "column_name": "hire_date", "data_type": "date" }
]
Next, it explores the structure of the salary table with the following query:
SELECT
  column_name, data_type
FROM
  information_schema.columns
WHERE
  table_schema = 'employees' AND table_name = 'salary'
ORDER BY
  ordinal_position;
And receives the column details:
[
  { "column_name": "employee_id", "data_type": "bigint" },
  { "column_name": "amount", "data_type": "bigint" },
  { "column_name": "from_date", "data_type": "date" },
  { "column_name": "to_date", "data_type": "date" }
]
Now that the model understands the tables' structures, it identifies the top 3 highest-paid employees by joining the employee and salary tables and retrieving each employee's most recent salary:
SELECT
  e.id,
  e.first_name,
  e.last_name,
  s.amount AS salary
FROM employees.employee e
JOIN employees.salary s ON e.id = s.employee_id
WHERE s.to_date = (
  SELECT MAX(to_date)
  FROM employees.salary s2
  WHERE s2.employee_id = e.id
)
ORDER BY s.amount DESC
LIMIT 3;
The MCP server responds with:
[
  { "id": "43624", "first_name": "Tokuyasu", "last_name": "Pesch", "salary": "158220" },
  { "id": "254466", "first_name": "Honesty", "last_name": "Mukaidono", "salary": "156286" },
  { "id": "47978", "first_name": "Xiahua", "last_name": "Whitcomb", "salary": "155709" }
]
Finally, Claude Desktop displays the answer to our question: the top 3 highest-paid employees in the database.

Isn’t this reasoning process remarkable? Watching the model think through the problem step by step gives us both transparency and confidence in its results—an invaluable tool for anyone who wants deeper insights into their data.
Conclusion
By combining LLMs with the Model Context Protocol, we’ve unlocked a new era of AI capabilities: one where large language models can securely, transparently, and intelligently interact with complex databases and external tools. Instead of opaque black-box responses, we can now watch the model reason step by step—understanding schemas, inspecting tables, and crafting precise queries—all in real time. MCP turns what used to be a fragmented, tedious integration process into a unified and streamlined experience, empowering developers and organizations to build AI systems that are not just powerful but also verifiable and maintainable. As the AI landscape continues to advance, standards like MCP will be essential for creating systems we can trust—and that can keep pace with our ever-growing data and operational needs.
