As most of us here can see, for many tasks you no longer need to worry about having exactly the right syntax. I think you still need expert precision when it matters immensely, but we all build tools, scripts, layers, and the like where manual precision isn't necessary.
The API breakage coincides pretty well with the arrival of their brand-new CTO, whose objective is apparently "transformation to a smart access software company".
It's unclear if the CTO just doesn't understand that "DDoS" generally implies malice, or if they're intentionally using that language to blame users for using their product.
Good news: ratgdo, an ESP-based local solution, works great. I hope the author is making a decent profit on the kits.
I used a local Meross install on my old garage doors, time to break them out, but ugh...
Seems like they're sacrificing some quality for large gains in speed and cost, but does anyone know more detail?
I'm definitely curious about the context window increase -- I'm having a hard time telling if it's 'real' vs. a fast, specially trained summarization prework step. That said, it's done a rather solid job of not losing info in that context window in my minor anecdotal use cases.
Read the following passage from [new ML article]. Identify their assumptions, and tell me which mathematical operations or procedures they use depend upon these assumptions.
GPT-4: Usually correctly identifies the assumptions, and often quotes the correct mathematics in its reply.
GPT-4 Turbo: Sometimes identifies the assumptions, and is guaranteed to stop trying at that point and then give me a Wikipedia-like summary about the assumptions rather than finish the task. Further prompting will not improve its result.
```
from cataclysm import doom

def mystery_func():
    while True:
        pass

# predict whether the specified function halts
print(doom.does_it_halt(mystery_func))
```
I'll have to try that when I get back to my desk!
Don't get me wrong, it has emergent properties (more than you would expect from a fancy autocomplete), but factual output was never GPT-4's nor any other LLM's design goal.
To be fair, that's what pretty much every person does. The bar seems pretty high if we need more than that (especially from someone not specifically trained on a topic). It's not a universally perfect expert servant, but I've been exploring GPT-4's code generation in detail (i.e. via the 'cataclysm' module I just posted about). In one minute it can write functions as well as the average developer intern, most of the time.
We're keeping score in a weird way if our quick response is that it needs to "code without subtle but important errors", because that describes the majority of human developers, too. I've been writing code for 30 years, and if you put a gun to my head, my first draft of any complex code would still have subtle but important flaws.
I'm not saying you're bashing it, by the way, I get your point, but I do worry a bit when the first response is citing that the SOTA models get things wrong in 0-shot situations without full context. That's describing all of us.
```
from cataclysm import doom

# App gets the img file from the command line and saves it as a new
# file at half size with _half appended to the name
doom.resize_app()
```
Turned out to be all that's needed for a command-line file resize app (with PIL installed).
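For comparison, here's a rough sketch of what the conventionally hand-written version of that little app might look like, assuming Pillow is installed (the `resize_half` helper name and structure are mine, not what the model actually generated):

```python
import sys
from pathlib import Path

from PIL import Image  # assumes Pillow is installed


def resize_half(img_path: str) -> str:
    """Save a half-size copy of img_path with '_half' appended to the stem."""
    src = Path(img_path)
    dst = src.with_name(f"{src.stem}_half{src.suffix}")
    with Image.open(src) as im:
        # integer-divide both dimensions by two and write the new file
        im.resize((im.width // 2, im.height // 2)).save(dst)
    return str(dst)


if __name__ == "__main__" and len(sys.argv) > 1:
    print(resize_half(sys.argv[1]))
```

Roughly a dozen lines of boilerplate that the one-line comment replaced.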
Similar fun concept as the cataclysm library for Python: https://github.com/Mattie/cataclysm
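For the curious, the core trick a library like that can lean on is Python's `__getattr__` hook: attribute access on an object returns a function synthesized on demand. A minimal sketch, with a hardcoded stub standing in for the LLM call (all names here are illustrative, not cataclysm's actual internals):

```python
class Doom:
    """Attribute access synthesizes a function from its name on demand."""

    def __getattr__(self, name):
        # __getattr__ only fires for attributes that don't exist yet,
        # so each unknown name becomes a freshly generated function
        source = self._generate(name)  # an LLM would write this body
        namespace = {}
        exec(source, namespace)
        return namespace[name]

    @staticmethod
    def _generate(name):
        # stand-in for the model: a trivial implementation
        return f"def {name}(*args):\n    return args\n"


doom = Doom()
print(doom.echo(1, 2))  # → (1, 2)
```

The real library presumably also caches the generated source and feeds the call-site context to the model, but the dispatch mechanism is this simple.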