Readit News
nbsande commented on India orders smartphone makers to preload state-owned cyber safety app   reuters.com/sustainabilit... · Posted by u/jmsflknr
nbsande · a month ago
> With more than 5 million downloads since its launch, the app has helped block more than 3.7 million stolen or lost mobile phones, while more than 30 million fraudulent connections have also been terminated.

I might be reading this wrong, but these numbers seem very weird. Did more than half the people who downloaded the app block a stolen phone? And did each person who downloaded the app terminate 6 fraudulent connections?

nbsande commented on Asyncio: A library with too many sharp corners   sailor.li/asyncio... · Posted by u/chubot
btown · 5 months ago
I really wish the community had coalesced around gevent.

- no async/await, instead every possible thing that could block in the standard library is monkey-patched to yield to an event loop

- this means that you can write the same exact code for synchronous and concurrent workflows, and immediately get levels of concurrency only bounded by memory limits

- you'll never accidentally use a synchronous API in an async context and block your entire event loop (well, you can, if you spin on a tight CPU-bound loop, but that's a problem in asyncio too)

- the ecosystem doesn't need to implement libraries for asyncio and blocking code, everything just works

There's a universe where the gevent patches get accepted into Python as something like a Python 4, breaking some niche compatibility but creating the first ever first-class language with green-threading at its core.

But in this universe, we're left with silly things like "every part of the Django ecosystem dealing with models must be rewritten with 'a' prefixes or be abandoned" and it's a sad place indeed.

nbsande · 5 months ago
While having the same code for sync and async sounds nice, monkey patching code at runtime seems hacky. Any library that wants to use a lower-level implementation of network calls would need to handle the monkey patching itself, I assume.
nbsande commented on Show HN: Bringing multithreading to Python's async event loop   github.com/NeilBotelho/tu... · Posted by u/nbsande
m11a · a year ago
If I understand correctly, it sounds like the idea is to map N tasks to M threads.

I suppose it’d only really be useful if you have more tasks than you can have OS threads (due to the memory overhead of an OS thread), then maybe 10,000 tasks can run in 16 OS threads.

If that’s the case, then is this useful in any application other than when you have way too many threads to feasibly make each task an OS thread?

nbsande · a year ago
The idea is to map N tasks to M threads. This is useful for more than just the case where you need more threads than the OS can spin up. As you scale up the number of threads, you increase context-switching and CPU-scheduling overhead. Being able to schedule a large number of tasks on a small number of threads could reduce this overhead.
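The N-tasks-on-M-threads shape can be sketched with the stdlib thread pool (the task body here is a placeholder; real wins come when tasks are I/O-bound):

```python
from concurrent.futures import ThreadPoolExecutor

# Map N tasks onto M OS threads: 10,000 lightweight tasks scheduled
# across only 16 worker threads, instead of one OS thread per task.
def task(i):
    return i * i  # stand-in for real (ideally I/O-bound) work

with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(task, range(10_000)))

print(len(results))  # 10000
```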
nbsande commented on Show HN: Bringing multithreading to Python's async event loop   github.com/NeilBotelho/tu... · Posted by u/nbsande
bastawhiz · a year ago
This was exactly my question. Why do you even need an event loop? If awaits are just thread joins then what is the event loop actually doing? IO can just block, since other coroutines are on other threads and are unaffected.

Which is to say, why even bother with async if you want your code to be fully threaded? Async is an abstraction designed specifically to address the case where you're dealing with blocking IO on a single thread. If you're fully threaded, the problems async addresses don't exist anymore. So why bother?

nbsande · a year ago
Having too many threads all running at the same time can also cause a performance hit, and I don't mean hitting the OS limit on threads. The more threads you have running in parallel (remember, this is considering a GIL-less setup), the more you need to context switch between them. Having fewer threads each running an event loop lets you manage many tasks with only a few threads, for example by setting the number of event-loop threads to the number of CPU cores.
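A rough stdlib sketch of that shape, one event loop per thread with the thread count pinned to the core count (task bodies are illustrative):

```python
import asyncio
import os
import threading

# One event loop per thread, thread count pinned to the CPU core count:
# each loop multiplexes many tasks on its own thread, keeping the number
# of OS threads (and context switches) small.
async def worker(n_tasks):
    async def tiny_task(i):
        await asyncio.sleep(0)  # stand-in for awaiting real I/O
        return i
    return await asyncio.gather(*(tiny_task(i) for i in range(n_tasks)))

def run_loop(loop_id, out):
    # asyncio.run creates a fresh event loop for this thread.
    out[loop_id] = asyncio.run(worker(1000))

n_threads = os.cpu_count() or 4
results = {}
threads = [threading.Thread(target=run_loop, args=(i, results))
           for i in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(len(v) for v in results.values()))  # n_threads * 1000
```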
nbsande commented on Show HN: Bringing multithreading to Python's async event loop   github.com/NeilBotelho/tu... · Posted by u/nbsande
gloryjulio · a year ago
The problem is that your example is not what most of the companies that use Python are facing, which is the majority of Python code. They want some kind of performance uplift without rewriting the whole Python code base. It's cheaper if Python keeps getting some kind of upgrade.

An example is Facebook's PHP-to-Hack compiler.

nbsande · a year ago
The use case I had in mind when writing it is FastAPI. In that case, there wouldn't be any change to the Python code. You'd just use a different ASGI server that uses this sort of multithreaded event loop. So instead of running it with uvicorn main:app, you'd run it with alternateASGI main:app.

I have an example of a very basic ASGI server that does just that towards the end of the blog
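For context on why no application changes are needed: the ASGI interface is just an async callable, so any conforming server can host the same app. A minimal sketch (the fake_server driver is illustrative, not the blog's actual server):

```python
import asyncio

# A minimal ASGI application: an async callable taking (scope, receive, send).
# Any ASGI server -- uvicorn or an alternative multithreaded one -- can host
# it unchanged, which is why swapping servers needs no application changes.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello"})

# Drive the app with fake server callables to show the protocol in action.
async def fake_server():
    sent = []
    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}
    async def send(message):
        sent.append(message)
    await app({"type": "http", "method": "GET", "path": "/"}, receive, send)
    return sent

messages = asyncio.run(fake_server())
print([m["type"] for m in messages])
# ['http.response.start', 'http.response.body']
```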

nbsande commented on Show HN: Bringing multithreading to Python's async event loop   github.com/NeilBotelho/tu... · Posted by u/nbsande
game_the0ry · a year ago
I am fairly confident I will get some down votes for this, but here goes...

When I am trying to solve a technical problem, the problem is going to dictate my choice of tooling.

If I am doing some fast scripting or I need to write some glue code, Python is my go-to. But if I have a need for resource efficiency, multithreading, non-blocking async I/O, and/or high performance, I would not consider Python; I would probably use the JVM over the best Python option.

Don't get me wrong, I think it's a worthwhile effort to explore this, and I certainly do not think it's a wasted effort (quite the opposite; this gets my upvote). I just don't think I would ever use it if I had a use case demanding performance and resource efficiency.

nbsande · a year ago
Hard agree. If you want resource efficiency and high performance, you're probably better off looking to lower-level languages most of the time. In my experience, FastAPI usually gets used by teams that need a server done quickly and simply, or that are constrained by a lack of experience with low-level languages. That being said, I do think it's worthwhile to try to improve efficiency slightly even for these cases.
nbsande commented on Show HN: Bringing multithreading to Python's async event loop   github.com/NeilBotelho/tu... · Posted by u/nbsande
quotemstr · a year ago
At that point, why bother with asyncio? What we really want is something like Java virtual threads, something that doesn't have a code color.
nbsande · a year ago
Hmmm. That would indeed be better. Seems like an interesting experiment to try to implement virtual threads for Python!
nbsande commented on Show HN: Bringing multithreading to Python's async event loop   github.com/NeilBotelho/tu... · Posted by u/nbsande
noident · a year ago
> It got me wondering if it was actually possible to make Python’s async event loop work with multiple threads.

There is built-in support for this. Take a look at loop.run_in_executor. You can await something scheduled in a separate Thread/ProcessPoolExecutor.

Granted, this is different from making the async library end-to-end multithreaded as you seem to be trying to do, but it does seem worth mentioning in this context. You _can_ have async and multiple threads at the same time!
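For reference, run_in_executor looks like this in practice (blocking_io is a placeholder for any synchronous call):

```python
import asyncio
import concurrent.futures
import time

# Await blocking work on a separate thread via loop.run_in_executor,
# so the event loop stays free while the sync call runs.
def blocking_io():
    time.sleep(0.1)  # stand-in for a blocking call (disk, legacy client, ...)
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        # Schedule the sync function on the pool and await its result.
        return await loop.run_in_executor(pool, blocking_io)

result = asyncio.run(main())
print(result)  # done
```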

nbsande · a year ago
run_in_executor is pretty powerful for running sync code inside async code, but my use case was more about making async code utilize the CPU better. I think just using run_in_executor would add a lot of complication and change how you use async/await. But great point nonetheless!
nbsande commented on Apple introduces M4 chip   apple.com/newsroom/2024/0... · Posted by u/excsn
paulpan · 2 years ago
I don't think it's strictly for price gouging/segmentation purposes.

On MacBooks (running macOS), RAM is used as a data cache to speed up read/write performance until the actual SSD storage operation completes. It makes sense for Apple to account for this with a higher RAM spec on the 1TB/2TB configurations.

nbsande · 2 years ago
If I'm understanding your point correctly, that wouldn't prevent them from offering higher RAM specs for the lower-storage (e.g. 512 GB) Macs. So it seems like it is just price gouging.

u/nbsande

Karma: 80 · Cake day: November 4, 2020