redhale 7 hours ago

Am I losing my mind? I feel like we're making things harder for ourselves. My mantra is: just use CLI tools. Or maybe "CLI tools are all you need"?

"--help" mechanics are already built in so the agent can discover what commands are available and how to use them. The agent can write shell scripts to wrap recurring uses or sequences, and then invoke those via CLI. And there are tons of well-tested existing CLI tools available.

I feel like this article, by assuming MCP from the start, forces itself to overcomplicate the issue.

Just one example: the Atlassian CLI works much more reliably (both in general and as an agent tool) than the Atlassian MCP server. The MCP server has weird auth issues all the time, where it will just fail to auth in a session. Atlassian publishes both officially.

cjonas 17 hours ago

"Code interpreters" are incredibly powerful tools for agents as it allows them to process large amounts of data without actually having to move the tokens through it's context window.

However, I don't actually see what any of this has to do with MCP. It's more so just tool calling + code interpreter design patterns. If anything, the MCP hype has resulted in a lot of BAD tools being written that return a ridiculous number of tokens.

SQL is really a perfect solution for letting the agent access data, but in most applications it's not realistic to hand it a DB connection. You either need RLS and per-user connection pooling (like Supabase), or strict application-level tenant filtering (which is tricky), and even then you still can't efficiently join data from multiple sources.
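(As a toy sketch of why application-level filtering is tricky, with sqlite3 standing in for the real database: every query path has to remember the tenant scope, and one missed path leaks another tenant's rows.)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, tenant_id TEXT, total REAL)")
db.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 10.0), (2, "acme", 25.0), (3, "globex", 99.0)],
)

def run_agent_sql(tenant_id: str, agent_where: str) -> list[tuple]:
    # Tenant scoping is bolted on in the application layer; forget it on a
    # single code path and the agent can read another tenant's rows.
    sql = f"SELECT id, total FROM orders WHERE tenant_id = ? AND ({agent_where})"
    return db.execute(sql, (tenant_id,)).fetchall()

print(run_agent_sql("acme", "total > 5"))  # -> [(1, 10.0), (2, 25.0)]
```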

I recently built a system using tenant-isolated S3 "dataponds", populated with parquet files, that the agent queries with DuckDB.

The VM (agent core) gets a short-lived "assume role" STS token injected so it can only access buckets the user is entitled to (plus VPC networking, so no other ingress/egress). Each location has a `manifest.json` file that describes the contents and schema. The agent can query the parquet files using DuckDB, do additional processing with pandas, and then share files back to the user by writing them to S3 in special predefined display formats (data-table, time-series, etc.).
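Roughly, the query path can look like the sketch below. The bucket name, prefixes, and display-format keys are made up, and it assumes the injected STS credentials already sit in the standard AWS environment variables.

```python
import json
import os

import boto3
import duckdb

BUCKET = "tenant-datapond-example"  # hypothetical tenant-scoped bucket

# The injected STS credentials are assumed to be in the environment
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN).
s3 = boto3.client("s3")

# Read the manifest to learn which datasets and schemas this tenant has.
manifest = json.loads(
    s3.get_object(Bucket=BUCKET, Key="manifest.json")["Body"].read()
)

# Point DuckDB's httpfs extension at the same short-lived credentials.
con = duckdb.connect()
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")
con.execute(f"SET s3_region='{os.environ.get('AWS_REGION', 'us-east-1')}';")
con.execute(f"SET s3_access_key_id='{os.environ['AWS_ACCESS_KEY_ID']}';")
con.execute(f"SET s3_secret_access_key='{os.environ['AWS_SECRET_ACCESS_KEY']}';")
con.execute(f"SET s3_session_token='{os.environ['AWS_SESSION_TOKEN']}';")

# Query the parquet files in place, then drop into pandas for anything extra.
df = con.execute(
    f"SELECT * FROM read_parquet('s3://{BUCKET}/orders/*.parquet') LIMIT 100"
).df()

# Share a result back to the user by writing it under a predefined display format.
s3.put_object(
    Bucket=BUCKET,
    Key="display/data-table/result-0001.json",
    Body=df.to_json(orient="records").encode(),
)
```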

The file IDs, along with a sample of the data, are passed back to the LLM. It can then embed a file in its response using a special tag, and a custom markdown renderer displays it back to the user.
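The handoff back to the model can be as small as this; the tag syntax here is invented for illustration, since the real one is specific to the renderer:

```python
# Hypothetical shape of the tool result handed back to the LLM.
tool_result = {
    "file_id": "result-0001",
    "display_format": "data-table",
    "sample_rows": [{"id": 1, "total": 10.0}, {"id": 2, "total": 25.0}],
}

# The model then embeds the file in its reply with a special tag, e.g.:
reply = 'Top orders: <display file-id="result-0001" format="data-table" />'
# ...and the custom markdown renderer swaps the tag for the rendered table.
```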

From what I can tell, this is basically what ChatGPT has done for a long time (minus the preconfigured "datapond" and special response formats), but it's pretty awesome how little effort it takes to build a system like this today.

pjmlp 10 hours ago

I see MCP with agentic runtimes as the next step in Web API orchestration tools, which is kind of interesting, but also depressing in the sense that we will keep writing less and less code.

It doesn't matter how great Rust happens to be compared to Python if most modern coding becomes voice/text prompts and flow diagrams orchestrating MCP tools.