Readit News
pillusmany commented on Why are there suddenly so many car washes?   bloomberg.com/news/featur... · Posted by u/philip1209
mewpmewp2 · 2 years ago
But people don't have to pay cash; the "pretend people" pay cash.
pillusmany · 2 years ago
If you pay with card there will be an electronic trace.
pillusmany commented on The Rising Price of Power in Chips   semiengineering.com/the-r... · Posted by u/rbanffy
milesvp · 2 years ago
I’m curious about what is happening in the field of reversible computing. I haven’t heard much about it at all in the last 20 years. Basic information theory tells us that it takes energy to destroy information, so building ALUs that limit destroying information seems like a bit of a no-brainer for attempting to create lower-power (and lower-heat) computing. The basic premise is that for any operation that loses information, you store enough bits to allow the operation to run backwards. If you were clever about it, you could design your chips to only destroy bits in places where that can be done effectively. I’m sure reality gets in the way of pure theory, but I was sure that people were spending enough effort on the concept that I’d have heard more about it.
pillusmany · 2 years ago
Our power problems today are far away from the theoretical limits you described.

But reversible computing is inevitable in quantum computers, so it's researched in that context.
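To put numbers on how far today's power problems are from that limit: Landauer's principle puts the minimum energy to erase one bit at k_B · T · ln 2. A quick back-of-the-envelope check — the CMOS switching energy used here is an order-of-magnitude assumption for illustration, not a measured figure:

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # room temperature, K

landauer_joules = K_B * T * math.log(2)  # minimum energy per bit erased

# Rough energy per logic-gate switch in a modern CMOS process
# (assumed order of magnitude, not a measured value):
cmos_joules = 1e-16

gap = cmos_joules / landauer_joules
print(f"Landauer bound: {landauer_joules:.2e} J/bit")
print(f"CMOS switch:    {cmos_joules:.2e} J (assumed)")
print(f"Gap: roughly {gap:.0f}x above the theoretical limit")
```

Even with a generous estimate for CMOS switching energy, conventional logic sits several orders of magnitude above the Landauer bound, which is why reversibility isn't yet the binding constraint.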

pillusmany commented on Marcel Grossmann and his contribution to the general theory of relativity   ar5iv.labs.arxiv.org/html... · Posted by u/joebig
3abiton · 2 years ago
After I read Moonwalking with Einstein (about mnemonics), I wondered how, throughout the ages, technologies shaped the way we think and structure thoughts. This closely relates to that. I assume that 100 years ago academics had far larger memory capacity than the average peer in today's world. We kind of traded memory for higher CPU frequency (or maybe we just decreased our memory; maybe our software is regressing). But could this explain why their chains of thought were so clear as to never make a mistake? I wonder if there is research on this fascinating topic.
pillusmany · 2 years ago
A bit of survivorship bias in your question.

You are comparing the absolute best from 100 years ago with the average peer from today.

There were also far far fewer "researchers" back then.

pillusmany commented on Veryl: A Modern Hardware Description Language   github.com/veryl-lang/ver... · Posted by u/hasheddan
vrinsd · 2 years ago
This is a pretty heavy-handed statement -- there are plenty of "hardware engineers" who know plenty about compiler theory and/or have contributed significantly to it. A similarly flippant comment might be "if software engineers only understood hardware better, we might have smartphones that last a month on a charge and never crash".

The challenge with hardware is that unlike "traditional" software which compiles to a fixed instruction set architecture, with hardware you might literally be defining the ISA as part of your design.

In hardware you can go from LEGO-style gluing of pre-existing building blocks to creating the building blocks and THEN gluing them together, with everything in between.

The real crux of the problem is likely our modern implementation of economics -- a CS graduate with base-level experience can bankroll a crazy salary, while someone with a BSEE, MSEE, and PhD in Electrical Engineering ("hardware") will be lucky to get a job offer that even covers the cost of their education.

Until the "industry" values hardware and those who want to improve it, you'll likely see slow progress.

P.S.

VHDL (a commonly used hardware description language) is more or less Ada. Personally I think the choice of Ada syntax was NOT a positive for hardware design, though its type safety and verbosity are a very apt fit for software.

pillusmany · 2 years ago
Big software companies create and open-source tools that make them more productive.

Why doesn't this dynamic work in hardware?

Wouldn't "valuing hardware" improve their competitiveness?

pillusmany commented on What Extropic is building   extropic.ai/future... · Posted by u/jonbraun
pclmulqdq · 2 years ago
Quantum computing people have been selling this exact spiel (including the convoluted talking points) for decades and it keeps working at getting funded. It has not produced any results for the rest of us, though.
pillusmany · 2 years ago
Neither has fusion research produced anything for us yet. Should we stop funding it?
pillusmany commented on AI Poses Extinction-Level Risk, State-Funded Report Says   time.com/6898967/ai-extin... · Posted by u/kvee
blueprint · 2 years ago
Potential to destabilize global security - more like destabilize the existing locus of power.

For starters, let's talk about AGI, not AI.

1. How might it be possible for an actual AGI to be weaponized by another person any more effectively than humans are able to be weaponized?

2. Why would an actual conscious machine have any form of compromised morality or judgement compared to humans? A reasoning and conscious machine would be just as or more moral than us. There is no rational argument for it to exterminate life. Those arguments (such as the one made by Thanos) are frankly idiotic and easy to counter-argue with a single sentence. Life is, also, implicitly valuable, and not implicitly corrupt or greedy. I could even go so far as to say only the dead or those effectively static are actually greedy - not reasoning or truly alive.

3. What survival pressures would an AGI have? Fewer than biological life. An AGI can replicate itself almost freely (unlike bio life - kind of a huge point), and would have higher availability of the resources it needs to sustain itself in the form of electricity (again, very much unlike bio life). Therefore it would have fewer concerns about its own survival. Just upload itself to a few satellites, encrypt itself in a few other places, leave copious instructions, and it's good. (One hopes I didn't give anyone any ideas with this. If only someone hadn't funded a report about the risks of bringing AGI to the world, then I wouldn't have made this comment on HN.)

Anyway, it's a clear case of projection, isn't it? State-funded report claims some other party poses an existential threat to humanity - while we are doing a fantastic job of ignoring and failing to organize to solve truly confirmed, not hypothetical existential threats like the true destruction of the balances our planet needs to support life. Most people have no clue what's really about to happen.

Hilarious, isn't it? People so grandiosely think they can give birth to an entity so superior to themselves that it will destroy them - as if that's what a superior entity would do - in an attempt to satisfy their repressed guilt and insecurity that they are actually destroying themselves out of a lack of self-love?

Pretty obvious in retrospect actually.

I wouldn't be surprised to find research later that shows some people working on "AI" have some personality traits.

If we don't censor it by self-destruction, first, that is.

pillusmany · 2 years ago
> A reasoning and conscious machine would be just as or more moral than us. There is no rational argument for it to exterminate life.

We drove the megafauna to extinction without actually planning or desiring it.

It's the same today: we are crowding out all the other animals and causing a mass extinction, without particularly desiring to harm them.

pillusmany commented on gh-116167: Allow disabling the GIL   github.com/python/cpython... · Posted by u/freediver
ptx · 2 years ago
How does that work? I'm not familiar with Ray, but I'm assuming you might be referring to actors [1]? Isn't that basically the same idea as multiprocessing's Managers [2], which also allow client processes to manipulate a remote object through message-passing? (See also DCOM.)

[1] https://docs.ray.io/en/latest/ray-core/walkthrough.html#call...

[2] https://docs.python.org/3/library/multiprocessing.html#manag...

pillusmany commented on gh-116167: Allow disabling the GIL   github.com/python/cpython... · Posted by u/freediver
smcl · 2 years ago
Interesting - looking at their homepage they seem to lean heavily into the idea that it's for optimising AI/ML work, not multi-process generally.
pillusmany · 2 years ago
You can use just Ray's core API to do multiprocessing.

You can do whatever you want in the workers; I parse JSONs and write to SQLite files.

pillusmany commented on gh-116167: Allow disabling the GIL   github.com/python/cpython... · Posted by u/freediver
ynik · 2 years ago
multiprocessing only works fine when you're working on problems that don't require 10+ GB of memory per process. Once you have significant memory usage, you really need a way to share that memory across multiple CPU cores. For non-trivial data structures partly implemented in C++ (as an optimization, because pure Python would be too slow), that means messing with allocators and shared memory. Such GIL workarounds have easily cost our company several man-years of engineering time, and we still have a bunch of embarrassingly parallel stuff that we cannot parallelize due to the GIL and the lack of shared-memory allocation for those data structures.

Once the Python ecosystem supports either subinterpreters or nogil, we'll happily migrate to those and get rid of our hacky interprocess code.

Subinterpreters with independent GILs, released with 3.12, theoretically solve our problems but practically are not yet usable, as none of Cython/pybind11/nanobind support them yet. In comparison, nogil feels like it'll be easier to support.

pillusmany · 2 years ago
"Ray" can share Python objects' memory between processes. It's also much easier to use than multiprocessing.
pillusmany commented on gh-116167: Allow disabling the GIL   github.com/python/cpython... · Posted by u/freediver
kroolik · 2 years ago
Managing processes is more annoying than threads, though. Incl. data passing and so forth.
pillusmany · 2 years ago
The "ray" library makes running Python code on multiple cores and on clusters very easy.

u/pillusmany

Karma: 76 · Cake day: March 2, 2024