diff --git "a/2025/mcp-dev-summit-may/presentation.html" "b/2025/mcp-dev-summit-may/presentation.html" new file mode 100644--- /dev/null +++ "b/2025/mcp-dev-summit-may/presentation.html" @@ -0,0 +1,382 @@ +Tools
Resources: Building the Next Wave of MCP Apps
+
MCP Developers Summit, San Francisco
+
Shaun Smith
+
May 2025
+
llmindset.co.uk github.com/evalstate x.com/llmindsetuk
+
+
+

Tools

+
+
+

LLMs are good at ad-hoc integration

+

Simple for MCP Server Developers to add to Context with Tool Descriptions and Results

+

Relatively easy for MCP Client Applications to implement - symmetry with Chat Completion APIs

+
+
+
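The symmetry is easiest to see in code. Below is a minimal sketch of translating an MCP tools/list result into the tool definitions an OpenAI-style Chat Completion API expects; the MCP shapes follow the spec, while the target format is an assumption about the chat API in use:

```typescript
// Shape of a tool as returned by an MCP server's tools/list request.
interface McpTool {
  name: string;
  description?: string;
  inputSchema: { type: "object"; properties?: Record<string, unknown>; required?: string[] };
}

// Map MCP tools onto OpenAI-style function-calling definitions.
// The JSON Schema in inputSchema passes through essentially unchanged,
// which is why Tools are cheap for Client Applications to support.
function toChatCompletionTools(tools: McpTool[]) {
  return tools.map((tool) => ({
    type: "function" as const,
    function: {
      name: tool.name,
      description: tool.description ?? "",
      parameters: tool.inputSchema,
    },
  }));
}
```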
+ +

+
+
+
+
+
+

Tools require minimal coordination between MCP Server Developers and Client Application builders.

+
+
+

Tools don't solve the Context problem

+
+
+

Tool Descriptions, Calls and Results all consume context.

+

Even simple descriptions can easily consume 100-200 tokens per Tool.

+
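To make that concrete, here is a hypothetical tool definition and a deliberately crude token estimate (~4 characters per token); the real count depends on the model's tokenizer:

```typescript
// A fairly terse, hypothetical tool definition.
const searchPapersTool = {
  name: "search_papers",
  description:
    "Search Hugging Face papers by free-text query. Returns the title, " +
    "authors, abstract snippet and paper id for each match.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Free-text search query" },
      limit: { type: "number", description: "Maximum number of results" },
    },
    required: ["query"],
  },
};

// Rough rule of thumb: ~4 characters per token for English text and JSON.
const approxTokens = Math.round(JSON.stringify(searchPapersTool).length / 4);
console.log(`~${approxTokens} tokens before a single call is made`);
```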

LLMs can Get Lost in Multi-turn Conversation

+
+
+ +
    +
1. find papers on the kazakh language
2. find related models
3. find related spaces

"Concise Scenario"

MCP Server (HF Search), 4 tools

"Interfere Scenario"

MCP Server (HF Search), 4 tools
MCP Official Filesystem - 11 tools
MCP Official Github - 40 tools
+
+
+
+ +

+
+
+

Sonnet 3.7: 20 Runs - 47% longer, 89% more tokens

+
+

+
+
+
+
+
+

+
+
+
+

...we discover that when LLMs take a wrong turn in a conversation, they get lost and do not recover.

+

https://hf.co/papers/2505.06120

+
+
+
+

Tools as a Conversation Turn

+

Tool results aren't like traditional API responses: they become the next turn in a conversation with the Assistant.

+
+
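Put differently: whatever the server returns is appended to the message history and becomes the Assistant's next input; it never goes back to application code the way a REST response would. A minimal sketch, assuming an OpenAI-style message list on the client side:

```typescript
// Content blocks as they appear in an MCP CallToolResult.
type McpContent =
  | { type: "text"; text: string }
  | { type: "resource"; resource: { uri: string; mimeType?: string; text?: string } };

type ChatMessage = { role: "assistant" | "user" | "tool"; content: string; tool_call_id?: string };

// Fold a tool result back into the conversation: the result text is simply
// the next turn the model reads, so every byte competes for context.
function appendToolTurn(
  messages: ChatMessage[],
  toolCallId: string,
  result: { content: McpContent[] }
): ChatMessage[] {
  const text = result.content
    .map((c) => (c.type === "text" ? c.text : c.resource.text ?? c.resource.uri))
    .join("\n");
  messages.push({ role: "tool", tool_call_id: toolCallId, content: text });
  return messages;
}
```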
+

Some UX patterns emerging

+
• Github/VSCode: Dynamic Tool Bouquet
• Claude.ai: Selectable Tools
+
+
+
+

+
+
+
+

We need to think about semantics, not just data transfer...

+
+
+
+
+

What do I do with this content?

+

Client apps ask:

+

Can I render it?

+
• Is this for the Human to see?
• Is this for the LLM to process?
+

Can/Should I transform it?

+

Most content will be sent via the LLM API as text/plain anyway...

+

Resources give us semantics

+
+
+
+

+
+
+
+
+
+
+
+

A Closer look at Resources

+

Is Textual or Binary data

+

May suggest an audience

+

Has a MIME Type and uri

+

Embeddable in Prompts and Tool Results

+
+
+
+
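These properties appear directly on the wire. Here is a sketch of a tool result embedding a resource, following the content shapes in the MCP specification; the URI is borrowed from a later slide and the payload is invented for illustration:

```typescript
// A tool result that carries an embedded resource instead of plain text.
const callToolResult = {
  content: [
    {
      type: "resource" as const,
      resource: {
        uri: "typescript://project/errors",   // what is it?
        mimeType: "application/json",         // how do I handle it?
        text: JSON.stringify([
          { file: "src/user.ts", line: 12, message: "Unexpected any" },
        ]),
      },
      annotations: {
        audience: ["assistant"],              // who is it for?
      },
    },
  ],
  isError: false,
};
```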

+
+
+
+
+
+
+

Prompts - User Intent

+

Not just instruction templating: a precise way to inject task context. Especially helpful if we know User task intent.

+

Can contain User/Assistant conversation pairs and Resources.

+
+
+
+
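A sketch of a prompts/get result carrying exactly that: a User/Assistant exchange with a resource embedded in one of the messages. The shapes follow the MCP spec; the file name and contents are illustrative, echoing the in-context learning example later in the deck:

```typescript
// Result of prompts/get: a ready-made slice of conversation the client
// can drop straight into the context window.
const getPromptResult = {
  description: "Refactor TypeScript using the project's conventions",
  messages: [
    {
      role: "user" as const,
      content: {
        type: "text" as const,
        text: "Can you refactor this code to follow best practices?",
      },
    },
    {
      role: "user" as const,
      content: {
        type: "resource" as const,
        resource: {
          uri: "file:///examples/poor_input.ts",
          mimeType: "text/plain",
          text: 'let userName = "John"\nfunction getUser(id) { return database.find(id) }',
        },
      },
    },
    {
      role: "assistant" as const,
      content: {
        type: "text" as const,
        text: "Here's the refactored code: changed let to const, added type annotations...",
      },
    },
  ],
};
```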

+
+
+
+
+
+
+
+

Using and Embedding Resources

+

Prompts and Resources allow me to compose sophisticated tasks and workflows.

+

MIME Types allow the Client Application to intelligently handle data.

+
+
+
+

+
+
+
+

For example

+

MCP Servers that provide language-specific capabilities, such as Linting information, as subscribable resources.

+
+
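For instance, such a server would declare the subscribe capability and list a resource along these lines; the capability and resource fields follow the MCP spec, while the name and description are invented for illustration:

```typescript
// A linting server advertises that its resources can be subscribed to.
const serverCapabilities = {
  resources: { subscribe: true, listChanged: true },
};

// Entry as it might appear in that server's resources/list result.
const lintResource = {
  uri: "typescript://project/errors",
  name: "Current TypeScript diagnostics",
  description: "Live compiler and lint errors for the open project",
  mimeType: "application/json",
};
```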
+

Example - In-Context Learning for Code Maintenance

+
+
+

fast-agent comes bundled with a prompt-server to make trying these ideas easy.

+

In this example, we use in-context learning to load a sample input file and "teach" the LLM how to handle the next file in the conversation.

+
+
+
---USER
Can you refactor this code to follow best practices?

---RESOURCE
poor_input.ts

---ASSISTANT
Here's the refactored code:
- Changed let to const
- Added type annotations
- Used 2-space indentation
- Added explicit return type

const userName = "John";
function getUser(id: string): User {
  return database.find(id);
}

---USER
Can you refactor this code?

---RESOURCE
legacy_code.ts
+
+
+
+
+

Resource Features and Sampling

+
+
+

🔄 Subscribable

+

weather://san-francisco/current
→ Updates every 10 minutes (trigger new generation)

typescript://project/errors
→ Mutate context with current status feedback

+

🔍 Discoverable

+

Completions help users find available resources for interactive use.

+
+
+

🎯 Templated

+

calendar://meetings/{date}
→ Fetch any day's meetings

github://issues/{repo}/{number}
→ Access any issue dynamically

+

⚡ Sampling

+

Sampling can summarise or pre-process upstream content to optimize the main conversation thread.
hub://papers/{number}/auto-summary

+
+
+
+
+
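Sketched at the protocol level, using the request and notification names from the MCP spec and the URIs from this slide (payload values are illustrative):

```typescript
// Templated: the server advertises a URI template, the client expands it.
const resourceTemplate = {
  uriTemplate: "calendar://meetings/{date}",
  name: "Daily meetings",
  mimeType: "application/json",
};
const todaysMeetingsUri = resourceTemplate.uriTemplate.replace("{date}", "2025-05-23");

// Discoverable: completion/complete lets a UI suggest values for template arguments.
const completionRequest = {
  method: "completion/complete",
  params: {
    ref: { type: "ref/resource", uri: "calendar://meetings/{date}" },
    argument: { name: "date", value: "2025-05" },
  },
};

// Subscribable: the client registers interest once, the server notifies on change.
const subscribeRequest = {
  method: "resources/subscribe",
  params: { uri: "typescript://project/errors" },
};
const updatedNotification = {
  method: "notifications/resources/updated",
  params: { uri: "typescript://project/errors" },
};

// Sampling: the server asks the client's model to pre-process upstream content
// (for example an auto-summary) before it ever reaches the main conversation.
const samplingRequest = {
  method: "sampling/createMessage",
  params: {
    messages: [
      {
        role: "user",
        content: { type: "text", text: "Summarise this paper in five bullet points." },
      },
    ],
    maxTokens: 300,
  },
};
```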

URI schemes enable MCP Server Developers and Client Application builders to coordinate expectations and share understanding

+
+
+

Sharing Semantic Expectations

+

URI schemes allow MCP Server developers and Client Application builders to share semantic contracts with each other.

+
+
+
+
// HOME AUTOMATION
mcp://homeauto/sensors/list
mcp://sensor/temperature/living-room
mcp://sensor/motion/front-door
mcp://sensor/camera/mcp-webcam

search://papers?topic=llm&limit=10
+
+
+
+

Client knows:

+
• How to display sensor data
• How to subscribe to updates
• What UI components to show
• Aggregate common MCP Server capabilities
+
+
+
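A sketch of how a client might encode that knowledge: route on the URI scheme and host from the slide above, pick a renderer, and decide whether to subscribe. The registry and the UI hooks are hypothetical, not part of any SDK:

```typescript
type ResourceContents = { uri: string; mimeType?: string; text?: string };

// Placeholder UI/transport hooks - hypothetical, substitute your client's own.
declare function showSensorWidget(r: ResourceContents): void;
declare function showResultsTable(r: ResourceContents): void;
declare function renderAsPlainText(r: ResourceContents): void;
declare function subscribeToUpdates(uri: string): void;

// Hypothetical registry keyed by "scheme//host", matching the URIs above.
const handlers: Record<string, { render: (r: ResourceContents) => void; subscribe?: boolean }> = {
  "mcp://sensor": { render: showSensorWidget, subscribe: true },
  "search://papers": { render: showResultsTable },
};

function handleResource(resource: ResourceContents): void {
  const url = new URL(resource.uri);
  const handler = handlers[`${url.protocol}//${url.host}`];
  if (!handler) {
    renderAsPlainText(resource); // unknown scheme: fall back to plain text
    return;
  }
  handler.render(resource);
  if (handler.subscribe) {
    subscribeToUpdates(resource.uri); // wraps resources/subscribe
  }
}
```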
+
+

Example: Rich UI Client Libraries

+
+
+ +
+
+

mcp-ui by Ido Salomon (github.com/idosal)

+

Demonstrates a ui:// URI scheme for returning rich HTML content for interaction.

+

Includes both Client and Server SDKs and reference examples.

+

The same principle can apply to any number of domains (CRM, Home Automation, Platform Integration...)

+
+
+
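As an illustration of the idea only (not mcp-ui's actual API): a client that recognises a ui:// resource could hand its HTML payload to a sandboxed iframe rather than pushing it through the LLM as text. The #tool-output container and the text/html check are this sketch's own assumptions:

```typescript
// Illustrative handling of a ui:// resource in a browser-based client.
// The resource fields follow MCP resource contents; everything else is assumed.
function maybeRenderUiResource(resource: { uri: string; mimeType?: string; text?: string }): boolean {
  if (!resource.uri.startsWith("ui://") || resource.mimeType !== "text/html") {
    return false; // not a UI resource - let normal handling continue
  }
  const frame = document.createElement("iframe");
  frame.sandbox.add("allow-scripts");   // keep server-supplied HTML contained
  frame.srcdoc = resource.text ?? "";
  document.getElementById("tool-output")?.appendChild(frame);
  return true;
}
```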

+
+
+

URI Schemes

+

Using known URI schemes, MCP Server and Host Application Developers can share:

+
• Knowledge of Schemas
• Expected Hierarchy of Resources and Templates
• SDKs to accelerate rich integration of advanced features.
+

This is useful both for generic applications (e.g. User Interaction Patterns, Sorting/Filtering datasets) and for Domain Specific implementations.

+
+
+

Summary

+

Resources provide semantics, not just data transfer - they tell us what content is, who it's for, and how to handle it

+

URI schemes enable coordination between MCP Server developers and Client apps - sharing semantic contracts for advanced features

+

Prompts enable sophisticated task composition - especially powerful when we know User/Agent task intent (assembling resources, in-context learning)

+

Optimize tools for actions, Resources for context - consider fallback modes for compatibility across different clients

+

https://modelcontextprotocol-community.github.io/working-groups/

+
+

github.com/evalstate : x.com/llmindsetuk

+
+
+
+
1. LLMs Get Lost In Multi-Turn Conversation: https://hf.co/papers/2505.06120
2. mcp-ui: https://github.com/idosal/mcp-ui
3. MCP community Working Groups: https://modelcontextprotocol-community.github.io/working-groups/
4. Client/Server Content capabilities: https://github.com/modelcontextprotocol/modelcontextprotocol/pull/223
5. Prompts for in-context learning: https://x.com/llmindsetuk/status/1899148877787246888
6. PulseMCP Video demonstrating ICL, Dynamic Resource Generation and prompt-server: https://www.youtube.com/watch?v=MvFIo-qSwLU
+
+
+

Thank you

+
+

This was looped in the original presentation:

```typescript
interface Resource {

  // ---- Resource and EmbeddedResource ----

  // What is it?
  uri: "search://papers/quantum-2024/results"

  // How do I handle it?
  mimeType: "application/json"

  // Who is it for?
  audience?: ["assistant"] | ["user"] | ["user", "assistant"]

  // ---- Resource Only ----

  // How should I show it?
  name: "2024 Quantum Theory.json"

  // How should I describe it to the Assistant?
  description: "This is the JSON results associated with the 2024 Quantum Theory Results"
}
```

\ No newline at end of file