be7a commented on Gemini with Deep Think achieves gold-medal standard at the IMO   deepmind.google/discover/... · Posted by u/meetpateltech
be7a · a month ago
Super interesting that they moved away from last year's specialized, Lean-based system to a more general-purpose LLM + RL approach. I suspect this also improves performance outside of math competitions. It'll be fascinating to see how much further this frontier can go.

The article also suggests that the system used isn't too far ahead of their upcoming general "DeepThink" model / feature, which they announced for this summer.

be7a commented on Making 2.5 Flash and 2.5 Pro GA, and introducing Gemini 2.5 Flash-Lite   blog.google/products/gemi... · Posted by u/meetpateltech
zzleeper · 2 months ago
Good luck using 2.5 for anything non-trivial.

I have about 500,000 news articles I am parsing. OpenAI models work well, but I found Gemini made fewer mistakes.

Problem is, they give me a terrible 10k RPD (requests per day) limit. To move up to the next tier they require a minimum amount of spend, but I can't reach that amount even when maxing out the RPD limit for multiple days in a row.

I emailed them twice and completed their forms, but everyone knows how that goes. So now I'm back at OpenAI, with a model that makes a few more mistakes but that won't 403 me after half an hour of use because of their limits.

be7a · 2 months ago
Those rate limits apply only to the Gemini API. There is also Vertex AI on GCP, which offers the same models (and more, such as Claude) at the same pricing, but with much higher rate limits (basically none, as long as they don't need to cut anyone off in favor of provisioned-throughput customers, if I understand correctly) and with a process to get guaranteed throughput.
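
If you want to try the Vertex route, here is a minimal sketch using the google-genai Python SDK; the project ID, region, and prompt are placeholders, and it assumes you have a GCP project with Vertex AI enabled and application-default credentials configured:

```python
# pip install google-genai
from google import genai

# Same SDK as the Gemini API, but routed through Vertex AI
# instead of API keys and their per-tier rate limits.
client = genai.Client(
    vertexai=True,
    project="your-gcp-project",   # placeholder project ID
    location="us-central1",       # placeholder region
)

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Summarize the key claims in this article: ...",
)
print(response.text)
```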
be7a commented on uv downloads overtake Poetry for Wagtail users   wagtail.org/blog/uv-overt... · Posted by u/ThibWeb
qwertox · 5 months ago
I've read so much positive feedback about uv that I'd really like to use it, but I'm unsure whether it fits my needs.

I was heavily invested in virtualenv until I had to upgrade OS versions, which upgraded the Python versions and therefore broke the venvs.

I tried to solve this with pyenv, but having to recompile Python on every patch release wasn't something I would accept, especially on boards like Raspberry Pis.

Then I tried miniconda, which I initially liked only because of the precompiled Python binaries, and ultimately ended up using pyenv-managed miniforge so that I could run multiple "instances" of miniforge and upgrade it gradually.

Pyenv also has a plugin for adding suffixes to environments, which lets me keep multiple miniforges of the same version in different locations, e.g. miniforge-home and miniforge-media: -home keeps all files in the home directory, while -media keeps everything on a mounted NVMe, which is where I put projects with huge dependencies like CUDA so they don't clutter home, which is contained in a VM image.

It works really well: Jupyter and VS Code can use them as kernels/interpreters, and it is fully independent of the OS's Python, so OS upgrades (22.04 -> 24.04) are no longer an issue.

But I keep reading about all these benefits of uv and wish I could use it, yet my setup seems to have tied my hands. I don't think I can use uv in my projects.

Any recommendations?

Edit: Many of my projects share the same environment; this is absolutely normal for me. I only create a new environment if I know it will be complex enough to risk breaking things in existing environments.

be7a · 5 months ago
Have you checked out https://github.com/prefix-dev/pixi? It's built by the folks who developed Mamba (a faster Conda implementation). It supports PyPI dependencies via uv, offers first-class support for multiple environments and lockfiles, and can manage other system-level dependencies like CUDA. Its CLI also embraces much of the UX of uv and other modern dependency-management tools.
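
As a rough sketch of how that maps to your setup (package names and versions here are illustrative, not a drop-in config), a pixi.toml with a shared default environment plus an optional CUDA one might look something like:

```toml
[project]
name = "shared-env"
channels = ["conda-forge"]
platforms = ["linux-64", "linux-aarch64"]

[dependencies]
python = "3.12.*"        # prebuilt interpreter from conda-forge, no local compiles

[pypi-dependencies]
requests = "*"           # PyPI packages are resolved and installed via uv

[feature.cuda.system-requirements]
cuda = "12"

[feature.cuda.dependencies]
pytorch-gpu = "*"        # illustrative; pick whatever CUDA-enabled packages you need

[environments]
gpu = ["cuda"]           # e.g. `pixi run -e gpu ...` uses the CUDA-enabled lockfile
```

Everything is locked per environment and independent of the OS Python, so distro upgrades don't touch it.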
be7a commented on Mastermind Solver   stefanabikaram.com/blog/m... · Posted by u/stefanpie
be7a · 2 years ago
Mastermind intrigued me in the same way as the author some time ago, and I've used it as a standard problem when trying out new computational frameworks/methods ever since.

Here is my Rust version with multi-threading, SIMD, and WASM, running on your device inside a web app: https://0xbe7a.github.io/mastermind/

Repo: https://github.com/0xbe7a/mastermind

It is quite fast (1.8 billion position pairs evaluated in 1652 ms on my device) and can also exploit some symmetries in the solution space.
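
For context on what "position pairs" means here: Knuth-style solvers score every (guess, secret) pair with the feedback function and then pick the guess whose worst-case bucket of remaining candidates is smallest. A plain-Python sketch of that kernel (not my Rust/SIMD code, just an illustration with the classic 6-color, 4-peg game) looks roughly like:

```python
from collections import Counter
from itertools import product

COLORS = 6   # classic Mastermind: 6 colors
PEGS = 4     # 4 positions -> 6**4 = 1296 possible codes

def feedback(guess, secret):
    """Return (black, white) pegs for one guess/secret pair."""
    black = sum(g == s for g, s in zip(guess, secret))
    # Color overlap regardless of position, minus the exact matches.
    overlap = sum((Counter(guess) & Counter(secret)).values())
    return black, overlap - black

def best_guess(candidates, all_codes):
    """Knuth-style minimax: pick the guess with the smallest worst-case bucket."""
    def worst_case(guess):
        buckets = Counter(feedback(guess, secret) for secret in candidates)
        return max(buckets.values())
    return min(all_codes, key=worst_case)

all_codes = list(product(range(COLORS), repeat=PEGS))
print(best_guess(all_codes, all_codes))  # classic first guess is of the form AABB
```

The Rust version does essentially this pair evaluation, just batched with SIMD and spread across threads.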

u/be7a

Karma: 22 · Cake day: September 14, 2019
About
https://github.com/0xbe7a · https://twitter.com/0xbe7a