> According to Bloomberg and CNN, citing sources, SitusAMC sent data breach notifications to several financial giants, including JPMorgan Chase, Citigroup, and Morgan Stanley. SitusAMC also counts pension funds and state governments as customers, according to its website.
I'd imagine the same thing will happen here: It will prove more flexible to not push the model (and user) towards a UI that may not match what the user is trying to accomplish.
To me this seems like something I categorically don't want unless it is purely advisory.
In general, the only way to keep MCPs safe in an enterprise setting is to limit which connections are made.
It would be silly to provide every employee access to GitHub, regardless of whether they need it. It’s just distracting and unnecessary risk. Yet people are over-provisioning MCPs like you would install apps on a phone.
Principle of least access applies here just as it does anywhere else.
I don't want a crisis, and if we avert one I'll happily update my beliefs. But even if the crisis does come, I'll still have to figure out why it took so long to arrive.
Thanks, but I'm hanging on to my old Subaru.
Just saw that the Audi e-tron GT has amazing deals on the used market. Then I saw a new model coming out with a better battery, more power, better range, and more features. Suddenly last year’s model is way less compelling.
The idea behind skills is sound because context management matters.
However, skills are different from MCP. Skills have nothing to do with tool calling at all!
You can implement your own version of skills easily, and there is absolutely zero need for any kind of standard or framework. The way to do it is to register a tool / function that loads extra instructions and extends the base prompt, and presto - you have implemented your own version of skills.
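To make that concrete, here is a minimal sketch of that mechanic, assuming a generic tool-calling loop rather than any particular framework; `load_skill`, `SKILLS_DIR`, and `basePrompt` are illustrative names, not any vendor's actual API:

```typescript
// Sketch: "skills" as a prompt-extending tool in a generic tool-calling setup.
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

const SKILLS_DIR = "./skills"; // one markdown file per skill
let basePrompt = "You are a helpful assistant.";

// Advertise the available skills to the model via a tool definition.
const loadSkillTool = {
  name: "load_skill",
  description:
    "Load specialized instructions for a task. Available skills: " +
    readdirSync(SKILLS_DIR).join(", "),
  parameters: {
    type: "object",
    properties: { skill: { type: "string" } },
    required: ["skill"],
  },
};

// When the model calls the tool, append the skill file to the base prompt.
function loadSkill(skill: string): string {
  const content = readFileSync(join(SKILLS_DIR, skill), "utf8");
  basePrompt += "\n\n" + content;
  return "Skill loaded.";
}
```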
In ChatBotKit AI Widget we even have our own version of that, both for the server and for building client-side applications.
With client-side applications the whole thing is implemented with a simple React hook that adds the necessary tools to extend the prompt dynamically. You can easily come up with your own implementation of that in 20-30 lines of code. It is not complicated.
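A rough sketch of what such a hook could look like, assuming plain React state and a generic tool shape; `useSkills` and its tool definition are hypothetical, not the actual ChatBotKit AI Widget API:

```typescript
// Sketch: a hook that exposes a load_skill tool whose handler extends the prompt.
import { useCallback, useMemo, useState } from "react";

export function useSkills(skills: Record<string, string>, base: string) {
  const [prompt, setPrompt] = useState(base);

  // Tool handler: append the requested skill's content to the prompt.
  const handler = useCallback(
    ({ skill }: { skill: string }) => {
      const content = skills[skill];
      if (!content) return `Unknown skill: ${skill}`;
      setPrompt((p) => p + "\n\n" + content);
      return "Skill loaded.";
    },
    [skills]
  );

  // Tool definition advertised to the model.
  const tools = useMemo(
    () => [
      {
        name: "load_skill",
        description: `Load one of: ${Object.keys(skills).join(", ")}`,
        parameters: {
          type: "object",
          properties: { skill: { type: "string" } },
          required: ["skill"],
        },
        handler,
      },
    ],
    [skills, handler]
  );

  return { prompt, tools }; // pass these to whatever chat component you use
}
```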
Very often people latch onto some idea thinking it's the next big thing and hoping it will explode. It is not new and it won't explode! It is just part of a suite of tools that already exist in various forms. The mechanic is so simple at its core that it practically makes no sense to call it a standard, and there is absolutely zero need for it in most types of applications. It does make sense for coding assistants, though, since they work with quite a bit of data, so there it matters. But skills are not fundamentally different from a *.instruction.md prompt in Copilot, or AGENT.md and its variations.
One of the best patterns I’ve seen is having an /ai-notes folder with files like ‘adding-integration-tests.md’ that contain specialized knowledge suited to specific tasks. These “skills” can then be inserted/linked into prompts where I think they are relevant.
But these skills can’t be static. For best results, I observe what knowledge would make the AI better at the skill the next time. Sometimes I ask the AI to propose new learnings to add to the relevant skill files, and I adopt the sensible ones while managing length carefully.
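As a rough sketch of how the /ai-notes pattern might be wired up (the `buildPrompt` helper and the file-name keyword match are illustrative assumptions, not a prescribed implementation):

```typescript
// Sketch: pick note files whose names match the task and splice them into the prompt.
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

function buildPrompt(task: string, notesDir = "./ai-notes"): string {
  const relevant = readdirSync(notesDir).filter((file) =>
    // Crude relevance check: any word of the task appears in the file name.
    task
      .toLowerCase()
      .split(/\s+/)
      .some((word) => file.toLowerCase().includes(word))
  );
  const notes = relevant
    .map((file) => readFileSync(join(notesDir, file), "utf8"))
    .join("\n\n");
  return `${notes}\n\nTask: ${task}`;
}

// e.g. buildPrompt("add integration tests for the billing service")
// would pull in adding-integration-tests.md if that file exists.
```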
Skills are a great concept for specialized knowledge, but they really aren’t a groundbreaking idea. It’s just context engineering.