In principle, Rust could create something like std::num::NonZero and its corresponding sealed trait ZeroablePrimitive to mark that two bits are unused. But that doesn't exist yet, as far as I know.
"What is MCP, what does it bring to the table? Who knows. What does it do? The LLM stuff! Pay us $10 a month thanks!"
LLMs have function/tool calling built into them. No major models have any direct knowledge of MCP.
Not only do you not need MCP, but you should actively avoid using it.
Stick with tried and proven API standards that are actually observable and secure and let your models/agents directly interact with those API endpoints.
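To make that concrete, here's a minimal sketch of native tool calling against a plain HTTP endpoint, with no MCP layer in between. It assumes the OpenAI Python SDK and Chat Completions API; the get_room_temperature tool and the /sensors endpoint are made up purely for illustration.

```python
# Minimal sketch: native function/tool calling against an ordinary HTTP API,
# no MCP layer. Assumes the OpenAI Python SDK; the tool name and the
# /sensors endpoint are hypothetical.
import json
import requests
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_room_temperature",          # hypothetical tool
        "description": "Read the current temperature for a room",
        "parameters": {
            "type": "object",
            "properties": {"room": {"type": "string"}},
            "required": ["room"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How warm is the server room?"}],
    tools=tools,
)

# If the model decided to call the tool, execute it against the plain,
# observable REST endpoint and hand the result back on the next turn.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    reading = requests.get("https://example.internal/sensors", params=args).json()
    print(reading)
```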
For instance, you can put a thousand temperature sensors in a room, which give you 1000 temperature readouts. But all these temperature sensors are correlated, and if you project them down to latent space (using PCA or PLS if linear, projection to manifolds if nonlinear) you’ll create maybe 4 new latent variables (which are usually linear combinations of all other variables) that describe all the sensor readings (it’s a kind of compression). All you have to do then is control those 4 variables, not 1000.
In the chemical space, there are thousands of possible combinations of process conditions and mixtures that produce certain characteristics, but when you project them down to latent variables, there are usually less than 10 variables that give you the properties you want. So if you want to create a new chemical, all you have to do is target those few variables. You want a new product with particular characteristics? Figure out how to get < 10 variables (not 1000s) to their targets, and you have a new product.
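A rough sketch of the sensor example using scikit-learn's PCA. The synthetic data (four hidden drivers mixed into 1000 correlated readouts, plus noise) is invented just to show the compression; in practice the latent variables come from your real process data.

```python
# Sketch: 1000 correlated sensor readouts compressed to a few latent
# variables with PCA. The synthetic data is made up for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_sensors, n_drivers = 500, 1000, 4

# Each sensor is a linear combination of 4 hidden "drivers" plus noise,
# so the 1000 columns are heavily correlated.
drivers = rng.normal(size=(n_samples, n_drivers))
mixing = rng.normal(size=(n_drivers, n_sensors))
readings = drivers @ mixing + 0.05 * rng.normal(size=(n_samples, n_sensors))

pca = PCA(n_components=10).fit(readings)
print(pca.explained_variance_ratio_.round(3))
# The first ~4 components carry essentially all the variance: those are
# the latent variables you would monitor and control instead of 1000 sensors.

latent = pca.transform(readings)[:, :4]   # the compressed representation
print(latent.shape)                       # (500, 4)
```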
Everything else I've open-sourced has gone pretty well, comparatively.
One of my suggestions was that they include hash tables, rather than rely on records (linked lists with named keys). Got flamed as ignorant, and I've never emailed that mailing list again. A while later, they ended up adding hash tables to the language.
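For anyone unfamiliar with the tradeoff, here's the gist (Python standing in for whatever language the list was about): a record-style list of named pairs is a linear scan per lookup, while a hash table is constant time on average.

```python
# Illustration of the tradeoff behind that suggestion: looking up a named
# key in a "record"-style linked list of pairs is O(n), while a hash table
# lookup is O(1) on average.
pairs = [("name", "Ada"), ("role", "engineer"), ("team", "compilers")]

def alist_get(pairs, key):
    # Walk the list until the key matches -- linear in the number of fields.
    for k, v in pairs:
        if k == key:
            return v
    raise KeyError(key)

table = dict(pairs)        # hash table: one hash + bucket probe per lookup

print(alist_get(pairs, "role"))   # "engineer", after scanning the list
print(table["role"])              # "engineer", in (amortized) constant time
```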
Why did Google publish the Transformer architecture instead of keeping it to themselves?
I understand that people may want to do good things for humanity, facilitate progress, etc. But if an action goes against commercial interest, how can company management take it without drawing objections from shareholders?
Or is there a commercial logic that motivates sharing information and intellectual property? If so, what logic is that?
1. Goodwill and mindshare. If you're known as "the best" or "the most innovative", then you'll attract customers.
2. Talent acquisition. Smart people like working with smart people.
3. Becoming the standard. If your technology becomes widely adopted, and you've been using it the longest, then you're suddenly the best placed in your industry to make use of the technology while everyone else retools.
4. Deception. Sometimes you publish work that's "old" internally but is still state of the art. This provides your competition with a false sense of where your research actually is.
5. Free-ride on others' work. Maybe experimenting with extending an idea is too expensive or risky to fund internally? Perhaps a wave of startups will try. Acquire the one that actually makes it work.
6. Undercut the market leader. If your industry has a clear market leader, the others can use open source to cooperate to erode that leadership position.
Describing C as "high-level" seems like deliberate abuse of the term. The virtual machine abstraction doesn't imply any benefits to the developer.
C has been classed as a high-level language since its inception; the term's meaning has shifted since then, though. When C was created, the alternatives were assembly (mid-level) or writing CPU opcodes directly in binary/hex (low level), and anything above those counted as high-level.
I'm shocked that the original post being referred to made this mistake. I recently implemented Postgres FTS in a personal project, and did so just by reading the Postgres documentation on FTS and following the instructions. The docs lead you through creating the base, unoptimized case and then optimizing it, explaining the purpose of each step and why it's faster. It's really clear that this is what they're doing, and I can only assume that someone making this mistake is either doing so to intentionally misrepresent Postgres FTS, or because they haven't read the basic documentation.
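Roughly the progression the docs describe, sketched here as SQL issued through psycopg2. The docs table, its body column, and the connection string are placeholders, and the generated-column step assumes PostgreSQL 12+.

```python
# Sketch of the two stages the FTS docs walk through, issued from Python
# via psycopg2. Table/column names and the DSN are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=example")   # placeholder DSN
cur = conn.cursor()

# 1. Base, unoptimized form: to_tsvector() is recomputed for every row
#    on every query, so it works but scans the whole table.
cur.execute("""
    SELECT id
    FROM docs
    WHERE to_tsvector('english', body) @@ to_tsquery('english', 'postgres & fts');
""")

# 2. Optimized form from the same docs: store the tsvector once and index it,
#    so queries hit a GIN index instead of recomputing vectors per row.
cur.execute("""
    ALTER TABLE docs
        ADD COLUMN IF NOT EXISTS body_tsv tsvector
        GENERATED ALWAYS AS (to_tsvector('english', body)) STORED;
""")
cur.execute("CREATE INDEX IF NOT EXISTS docs_body_tsv_idx ON docs USING GIN (body_tsv);")
cur.execute("""
    SELECT id
    FROM docs
    WHERE body_tsv @@ to_tsquery('english', 'postgres & fts');
""")
conn.commit()
```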
One thing that's not mentioned here, but something that I took away from Wolfram's obituary of Lenat (https://writings.stephenwolfram.com/2023/09/remembering-doug...) was that Lenat was very easily distracted ("Could we somehow usefully connect [Wolfram|Alpha and the Wolfram Language] to CYC? ... But when I was at SXSW the next year Doug had something else he wanted to show me. It was a math education game.").
My armchair diagnosis is untreated ADHD. He might have had discussing the internals of CYC on his todo list since its first prototype, but the draft was never ready.
I mean, that basically just sums up how capitalism works. Profit growth is literally (even legally!) the only thing a company can care about. Everything else, like product quality, is in service of that goal.