- your function arguments aren't serializable
- your side effects (e.g. database writes) aren't idempotent
- discovering what backpressure is and that you need it
- losing queued tasks during deployment / non-compatible code changes
There's also some behavior particular to Celery's runtime model that makes it incredibly prone to memory leaks and other fun stuff.
Honestly, it's a great education.
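The idempotency pitfall above has a common fix: key each side effect on a unique identifier so a retried task becomes a no-op. A minimal sketch using sqlite3 (the table, column names, and payment scenario are hypothetical, not from any particular codebase):

```python
import sqlite3

# Hypothetical scenario: record a payment exactly once, even if the
# task that writes it is retried after a timeout or worker crash.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (payment_id TEXT PRIMARY KEY, amount REAL)")

def record_payment(payment_id, amount):
    # INSERT OR IGNORE makes the write idempotent: replaying the same
    # task with the same payment_id leaves the table unchanged.
    conn.execute(
        "INSERT OR IGNORE INTO payments (payment_id, amount) VALUES (?, ?)",
        (payment_id, amount),
    )
    conn.commit()

record_payment("abc-123", 9.99)
record_payment("abc-123", 9.99)  # simulated retry: no duplicate row
count = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
print(count)  # 1
```

The same idea shows up as "upsert" or natural-key constraints in most databases; the point is that the deduplication lives in the data store, not in the task code, so it survives worker crashes mid-task.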
Before I knew it I was helping organize the user group, including our weekly coffee shop meetups in addition to the monthly lecture gatherings. There were a lot of local startups (including some very well known businesses and non-profits today) very actively collaborating on these tools. Django was really changing the way a lot of companies used software and automation.
It wasn't only the engineering: the community ethos of Django at both the local and international scale (and of the Python community as a whole) really made it possible to branch out and accelerate my personal software engineering journey.
I do wonder how much deep search really matters when people only really expect to look at the first page.
[0]: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/da...
In my experience, you don't have to spend a lot of time thinking about scoring and relevancy for these types of search. Generally you only want to include matches within a small edit distance at all, just to handle misspellings.
This is vastly different when you have a corpus of millions of documents covering an encyclopedia's worth of topics.
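The small-edit-distance approach mentioned above can be sketched in a few lines: compute Levenshtein distance and only admit terms within a tight threshold. (This is a toy illustration; the word list and threshold are made up, and real systems would use an indexed structure rather than a linear scan.)

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_lookup(query, terms, max_distance=1):
    # Keep only terms within a small edit distance of the query,
    # which catches misspellings without flooding the results.
    return [t for t in terms if edit_distance(query, t) <= max_distance]

print(fuzzy_lookup("aple", ["apple", "maple", "grape"]))  # ['apple', 'maple']
```

Keeping `max_distance` at 1 or 2 is the whole relevancy story for many small, well-scoped corpora: anything further away is noise, not a plausible misspelling.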
> I do wonder how much deep search really matters when people only really expect to look at the first page.
Getting the first page to show the best-quality, most relevant results is much more difficult when the user is searching through something like scientific papers or stock video footage. The challenge is bridging the distance between ideas and expectations.
- Does the searcher already know the result they are looking for? (If yes, much easier)
- Are there subjective and objective qualities of the results which should alter the search score, sometimes separate from the text being indexed? (If yes, much harder)
- What is the quality of the text being indexed? (If end-user provided, this will vary widely)
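The second question above, subjective and objective qualities living outside the indexed text, usually turns into a scoring function that blends the raw text-match score with document-level signals. A toy illustration (the field names, weights, and boosts here are all hypothetical):

```python
def score(doc, text_relevance):
    # Blend the text-match score with quality signals that live
    # outside the indexed text. Weights and fields are made up.
    quality_boost = 1.0 + 0.5 * doc.get("editor_rating", 0) / 5.0
    recency_boost = 1.2 if doc.get("recent", False) else 1.0
    return text_relevance * quality_boost * recency_boost

doc = {"editor_rating": 5, "recent": True}
print(score(doc, 10.0))  # a top-rated recent doc outranks its raw text score
```

Tuning those weights is where the "much harder" part lives: the right blend depends on the corpus and on what searchers actually complain about.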
Ultimately, building good search is often a struggle to provide the best possible results while caught between searcher intent and incomplete document evaluation criteria. People never really think about a search that's working well, but they definitely notice and complain when it's working poorly.

The service scorecard asks a bunch of reflective questions about the ramifications of making some set of functions a separate service and scores its benefits, or lack thereof, on a scale.
I'm getting "cf-mitigated: challenge" on OpenAI API requests.
https://www.cloudflarestatus.com/
https://status.openai.com/