My mother is also a data point - she grew up on a farm where her father used it. She was diagnosed with Parkinson's in 2018.
It’s so foreign to me that any retail place would defer to “the computer” if display price and database price were out of sync.
Even young-me understood the idea of “oh yeah, our bad, have it at the lower price” and the potential for legal action if we did otherwise.
It’s to prevent employees from stealing. To “defer to the tag” requires a manual price override of some sort, which becomes an abuse vector.
Nonprofit X publishes outputs from a competing AI, which are not copyrightable.
Corp Y ingests content published by Nonprofit X.
You don't hire architects to execute a demolition, and you also don't hire anyone heavily invested in keeping the building standing. But you DO hire people loyal to you to perform the work, who will receive staunch opposition from the latter group of people.
While Twitter doesn't have downvoting, it is still dealing with "report brigades" - various interest groups will organize via Telegram (or similar) to mass-report tweets they don't like.
I wonder if you could strike a balance by incorporating downvotes as a visual metric, but not using them to rank content, thus allowing the expression of dislike while removing the abuse vector.
Nothing is forcing people to use Teams, but they do. It can't solely be because it's bundled and free. Do people just not want to spend any effort to do better? Is the friction that high?
Maybe in startups and small companies without a dedicated IT team, but an enterprise IT group will absolutely stop you. And Teams is very easy for them to administrate if they are already deploying MS products.
I've been using Claude pretty intensively over the last week and it's so much better than GPT. The larger context window (200k tokens vs ~16k) means that it can hold almost the entire codebase in memory and is much less likely to forget things.
The low request quota even for paid users is a pain though.
Just to add some clarification - the newer GPT4 models from OpenAI have 128k context windows[1]. I regularly load in the entirety of my React/Django project, via Aider.
1. https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turb...
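If you want to sanity-check whether a codebase plausibly fits in a given context window before loading it, a rough back-of-the-envelope sketch works. This assumes the common ~4-characters-per-token heuristic for English text and code, which is only an estimate; a real tokenizer (e.g. tiktoken) would give exact counts, and the file-walking helper here is purely illustrative:

```python
import os

# Assumption: roughly 4 characters per token for English/code.
# This is a heuristic for estimation only, not an exact tokenizer.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Estimate the token count of a string from its character length."""
    return len(text) // CHARS_PER_TOKEN

def project_token_estimate(root: str, extensions=(".py", ".js", ".tsx")) -> int:
    """Walk a project directory and sum estimated tokens of source files."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total += estimate_tokens(f.read())
    return total

# Example: a 400,000-character codebase is ~100k estimated tokens,
# which would fit in a 128k window but not a 16k one.
```

Under this heuristic, a mid-sized React/Django project on the order of a few hundred thousand characters lands around 100k tokens, which is consistent with the comment above about loading an entire project into a 128k window.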
What are the benefits of using Fructose over LMQL, Guidance or OpenAI's function calling?
So Harmony? Or something older? Since Z.ai also claims the thinking mode does tool calling and reasoning interwoven, it would make sense if it were straight-up OpenAI's Harmony.
> in theory, I could get a "relatively" cheap Mac Studio and run this locally
In practice, it'll be incredibly slow and you'll quickly regret spending that much money on it instead of just using paid APIs until proper hardware gets cheaper / models get smaller.
Yes, as someone who spent several thousand dollars on a multi-GPU setup, the only reasons to run local codegen inference right now are privacy or deep integration with the model itself.
It’s decidedly more cost efficient to use frontier model APIs. Frontier models trained to work with their tightly-coupled harnesses are worlds ahead of quantized models with generic harnesses.