I've used it, but too long ago to know its current state.
1) The flatbuffer parser can be configured at runtime from a schema file, so our message passing runtime does not need to know about any schemas at build time. It reads the schema files at startup, and is henceforth capable of translating messages to and from JSON when required. It's also possible to determine at runtime that two schemas are compatible.
2) Messages can be re-used. For our high-rate messages, we build a message and then modify it to send again, rather than building it from scratch each time.
3) Zero decode overhead - there is often no need to deserialise messages at all, so we avoid copying the data they contain.
The flatbuffer compiler is also extremely fast, which is nice at build time.
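Points 2 and 3 above can be sketched with a stdlib-only toy. To be clear, this is not the FlatBuffers API, and the wire layout below is invented purely for illustration; it just shows the underlying idea: when fields sit at known offsets in a flat binary buffer, a reader can pull out one value, or a writer can patch one field in place, without deserialising or rebuilding the whole message.

```python
# Toy illustration of zero-copy reads and in-place message re-use.
# NOT the FlatBuffers API; the layout is a made-up example.
import struct

# Hypothetical wire layout: little-endian uint32 sequence number,
# float64 timestamp, float64 value (offsets 0, 4, 12 - no padding
# with the "<" prefix).
LAYOUT = struct.Struct("<Idd")

def build(seq: int, ts: float, value: float) -> bytes:
    return LAYOUT.pack(seq, ts, value)

def read_value(buf) -> float:
    # "Zero decode": read only the third field, straight from the
    # buffer, without unpacking the rest of the message.
    return struct.unpack_from("<d", buf, offset=12)[0]

msg = build(7, 1700000000.0, 3.5)
print(read_value(msg))  # 3.5

# "Message re-use": for a high-rate stream, mutate one field in
# place rather than rebuilding the message from scratch each time.
buf = bytearray(msg)
struct.pack_into("<d", buf, 12, 4.25)
print(read_value(buf))  # 4.25
```

Real FlatBuffers buffers use vtables rather than fixed offsets, which is what makes schema evolution work, but the access pattern the commenter describes is the same: the buffer is the message.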
Gen AI exists to wrest control of information from the internet into the hands of the few. Once upon a time, the Encyclopaedia was a viable business model. It was destroyed as a business model once the internet grew to the point that a large percentage of the population was able to access it. At that point, information became free, and impossible to control.
Look at Google's "AI summaries" that they've inserted at the top of their search results. Often wrong, sometimes stupid, occasionally dangerous - but think about what will happen if and when people divert their attention from "the internet" to the AI summaries of the internet. The internet as we know it, the free repository of humanity's knowledge, will wither and die.
And that is the point. The point is to once again lock up the knowledge in obscure unmodifiable black boxes, because this provides opportunity to charge for access to them. They have literally harvested the world's information, given and created freely by all of us, and are attempting to sell it back to us.
Energy use is a distraction, in terms of why we must fight Gen AI. Energy use will go down; it's an argument easily countered by the Gen AI companies. Fight Gen AI because it is an attempt to steal back what was once the property of all of us. You can't ban it, but you can and absolutely should refuse to use it.
<first stage prototyping done>
"As we grow we need to move off ROS"
<slippery market and customers require new hires and agility>
"ROS has this thing that can replace 1000 lines of your bespoke code, and it works pretty well"
Round and round and round we go. Seen it happen first hand, will see it again.
I'm not sure I get this one. When I'm learning new tech I almost always have questions. I used to google them. If I couldn't find an answer I might try posting on stack overflow. Sometimes as I'm typing the question their search would finally kick in and find the answer (similar questions). Other times I'd post the question, if it didn't get closed, maybe I'd get an answer a few hours or days later.
Now I just ask ChatGPT or Gemini and more often than not it gives me the answer. That alone and nothing else (agent modes, AI editing or generating files) is enough to increase my output. I get answers 10x faster than I used to. I'm not sure what that has to do with the point about learning. Getting answers to those questions is learning, regardless of where the answer comes from.
What do you think will happen when everyone is using the AI tools to answer their questions? We'll be back in the world of Encyclopedias, in which central authorities spent large amounts of money manually collecting information and publishing it. And then they spent a good amount of time finding ways to sell that information to us, which was only fair because they spent all that time collating it. The internet pretty much destroyed that business model, and in some sense the AI "revolution" is trying to bring it back.
Also, he's specifically talking about having a coding tool write the code for you, he's not talking about using an AI tool to answer a question, so that you can go ahead and write the code yourself. These are different things, and he is treating them differently.
Well, not technically, but I know someone who is.
Where are the AI-driven breakthroughs? Or even the AI-driven incremental improvements? Do they exist anywhere? Or are we just using AI to remix existing general knowledge, while making no progress of any sort in any field using it?
So the question isn't simply whether storage is wasted; it's how much waste there is relative to the environmental impact. Granted, books and photographs don't need to be continuously fed energy to make the information available. However, the cost of storage is now so cheap that even with 90% waste, it's economically viable to keep it online. So the problem, if you can call it one, is that energy is too cheap, and externalities are not accounted for in the cost.
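The "economically viable even with 90% waste" claim is easy to sanity-check with back-of-envelope arithmetic. The dollar figure below is an illustrative assumption, not a quoted price:

```python
# Back-of-envelope: is 90%-wasted storage still cheap to keep online?
# The cost figure is an illustrative assumption, not a real quote.
useful_tb = 1.0           # data that actually gets accessed
waste_fraction = 0.90     # 90% of what's stored is never touched
stored_tb = useful_tb / (1 - waste_fraction)  # 10 TB on disk

cost_per_tb_year = 15.0   # assumed $/TB-year for bulk online storage
annual_cost = stored_tb * cost_per_tb_year

print(f"{stored_tb:.0f} TB stored, ${annual_cost:.0f}/year")
```

A 90% waste rate is only a 10x multiplier on the storage bill, and when the base cost per terabyte is a few tens of dollars a year, 10x of cheap is still cheap - which is the commenter's point: pruning rarely pays for itself, and the unpriced part is the externalities.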
I'm reasonably certain that this statistic is completely made up. The best number I can find for the proportion of library books that are never borrowed was from a university library, and was 25%.
Early AI hype cycles, after all, are where Prolog, like Lisp, shone.
Checks
Oh my god. 11/10.