The reason I love it so much is that it's just so straightforward to make a server or client that can talk to it. All of our embedded Linux systems are written in C++ right now and they have absolutely no problem publishing and consuming messages in our standard format. One of the original driving factors for this is that we do have some web-based and Electron-based UIs, and any protocol we made that wasn't HTTP-based or Websocket-based would have required them to do twice as much work: first, connecting to whatever service from a "backend" server and implementing whatever protocol it needed, and second, exposing that backend service to the frontend over a Websocket (generally... since it needed live updates). By standardizing on our in-flight services exposing everything as Websockets natively, we pretty much eliminated a whole tier of complicated logic. The frontends have a single generic piece of code with standardized reconnect/timeout/etc. logic in it, and the backends just have to #include <WSServer.h> and instantiate an object to be able to publish to listeners.
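To make that last bit concrete, here's a rough sketch of the kind of wrapper I mean. This is not our actual WSServer.h, and the port number and message fields are made up; the real thing wraps a WebSocket listener, which is stubbed out here so the example stands on its own:

    // Hypothetical sketch of a WSServer-style wrapper. The real transport is
    // elided; publish() just prints instead of broadcasting over a WebSocket.
    #include <cstdint>
    #include <iostream>
    #include <string>

    class WSServer {
    public:
        explicit WSServer(uint16_t port) : port_(port) {
            // real version: start a WebSocket listener on this port
        }
        void publish(const std::string& json) {
            // real version: send this frame to every connected frontend
            std::cout << "ws[" << port_ << "] -> " << json << "\n";
        }
    private:
        uint16_t port_;
    };

    int main() {
        WSServer telemetry(9001);  // port chosen arbitrarily for the example
        telemetry.publish(R"({"alt_m":123.4,"gs_mps":18.2})");  // one JSON update
    }

The point is how little ceremony the backend side needs once the wrapper exists.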
I definitely didn't start there. And I 100% understand where your opinion comes from... from so many different angles, a lot of the "modern" web systems shouldn't come within a mile of a safety-critical system. Websockets though? They're great! And while JSON isn't necessarily the most efficient encoding, it sure does make debugging easy. We run everything on a closed network that usually doesn't have an Internet connection, so we don't run TLS between the ground and air systems. If we need to figure out what's going on and an interface is acting up, we can just tcpdump it and have human-readable traffic to inspect.
The flight-critical stuff is isolated from all of this and spits out a serial telemetry feed (Mavlink). We do send that directly to the ground station over a dedicated radio, but we also have an airborne service that cooks it into Websockets, and in many cases the Websocket-over-very-special-WiFi connection has been more robust than the 915MHz serial link.
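For a sense of what that airborne service does, here's a hedged sketch using the generated Mavlink C headers (the include path depends on how you generate them) and the hypothetical WSServer wrapper from the earlier sketch. Serial I/O is reduced to a byte buffer, and only ATTITUDE is decoded as an example:

    // Sketch of "cook Mavlink into Websockets": parse bytes from the serial
    // feed and republish selected messages as JSON. Assumes the WSServer
    // class from the previous sketch is in scope.
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <common/mavlink.h>  // path varies with how the headers were generated

    void bridge_bytes(const uint8_t* buf, size_t len, WSServer& out) {
        mavlink_message_t msg;
        mavlink_status_t status;
        for (size_t i = 0; i < len; ++i) {
            if (!mavlink_parse_char(MAVLINK_COMM_0, buf[i], &msg, &status))
                continue;  // keep feeding bytes until a full frame is parsed
            if (msg.msgid == MAVLINK_MSG_ID_ATTITUDE) {
                mavlink_attitude_t att;
                mavlink_msg_attitude_decode(&msg, &att);
                char json[128];
                std::snprintf(json, sizeof(json),
                              R"({"roll":%.3f,"pitch":%.3f,"yaw":%.3f})",
                              att.roll, att.pitch, att.yaw);
                out.publish(json);  // fan out to any listening UIs
            }
        }
    }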
And it's not as if existing protocols like NMEA are all that good either.
If you have a specific concern, explain it, instead of just exuding judgment.
Simply firing up a CLI for troubleshooting and debugging of any kind should not be "expected behavior" at sea.
It has an excellent community of developers and a wide range of hardware components.
If you want it to “remember” things, you do that by appending all the previous conversation together and supplying it in the input string.
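In code, the naive version of that looks something like this (a minimal sketch; model_complete() is just a stand-in for whatever API call actually produces the reply, stubbed out here so the example runs):

    // "Memory" by concatenation: every turn, the whole history is glued
    // back together and sent as the new prompt, so the prompt grows each turn.
    #include <string>
    #include <vector>

    // Placeholder for the real model call; returns a canned reply.
    std::string model_complete(const std::string& prompt) {
        return "reply (saw " + std::to_string(prompt.size()) + " chars of context)";
    }

    std::string chat_turn(std::vector<std::string>& history, const std::string& user_msg) {
        history.push_back("User: " + user_msg);
        std::string prompt;
        for (const auto& line : history)
            prompt += line + "\n";                 // the entire conversation goes back in
        std::string reply = model_complete(prompt);
        history.push_back("Assistant: " + reply);  // and the reply is remembered too
        return reply;
    }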
In an ideal world this would work perfectly. It would read through the whole conversation and would provide the right output you expect, exactly as if it “remembered” the conversation. In reality there are all kinds of issues that can crop up as the input grows longer and longer. One is that it takes more and more processing power and time for it to “read through” everything previously said. And, as jmiskovic said, the output quality can also degrade in perhaps unexpected ways.
But that also doesn’t mean that “refinement of output is arrived at through conversation” isn't possible. It is not that black and white, just that you can run into trouble as the length of the discussion grows.
I don’t have direct experience with long conversations, so I can’t tell you how long is definitely too long and how long is still safe. Plus there are probably some tricks one can use to work around these. Probably there are things one can do if one unpacks that “black box” understanding of the process. But even without that, you could imagine a “consolidation” process where the AI is instructed to write short notes about a given length of conversation, and then those shorter notes would be copied into the next input instead of the full previous conversation. All of these are possible, but you won’t have a turn-key solution for it just yet.
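Sticking with the earlier sketch, that consolidation idea might look roughly like this (the 20-message threshold and the summary prompt are arbitrary, and model_complete() is the same placeholder as before):

    // "Consolidation" sketch: once the history gets long, ask the model for
    // short notes and keep only those instead of the full transcript.
    #include <string>
    #include <vector>

    std::string model_complete(const std::string& prompt);  // same stub as the earlier sketch

    void consolidate(std::vector<std::string>& history) {
        if (history.size() < 20) return;  // arbitrary cutoff before summarizing
        std::string prompt = "Summarize the following conversation as short notes:\n";
        for (const auto& line : history)
            prompt += line + "\n";
        std::string notes = model_complete(prompt);
        history.clear();
        history.push_back("Notes on the earlier conversation: " + notes);
    }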
A bit of a shower thought, but I think you can probably generalize that idea: Most birds spend a significant part of their life in the air, looking down onto human-designed landscapes. The aerial view of our cities is probably as familiar to them as our neighborhood streets are to us.
But unlike Google Maps, they see the city moving: all the cars, pedestrians, trams, railways, etc. It seems likely to me that if this is what you see day in, day out, because it is literally the space you are living in, you will pick up some general patterns in what you see and might even start to experiment with how those patterns can be exploited.