Readit News
quectophoton commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
SalmoShalazar · a day ago
The foregone conclusion that LLMs are the key to, or even a major step toward, AGI is frustrating. They are not, and we are fooling ourselves. They are incredible knowledge stores and statistical machines, but general intelligence is far more than those attributes.
quectophoton · a day ago
My thoughts are that LLMs are like cooking a chicken by slapping it: yes, it works, but you need to reach a certain amount of kinetic energy (the same way LLMs only "start working" after reaching a certain size).

So then, if we can cook a chicken like this, we can also heat a whole house like this during winters, right? We just need a chicken-slapper that's even bigger and even faster, and slap the whole house to heat it up.

There are probably better analogies (because I know people will nitpick that we knew about fire way before kinetic energy), so maybe AI = "flight by inventing machines with flapping wings" and AGI = "space travel with machines that flap wings even faster". But the house-sized chicken-slapper illustrates how I view the current trend of trying to reach AGI by scaling up LLMs.

quectophoton commented on Materialized views are obviously useful   sophiebits.com/2025/08/22... · Posted by u/gz09
quectophoton · a day ago
> And then by magic the results of this query will just always exist and be up-to-date.

With PostgreSQL the materialized view won't be automatically updated though, you need to do `REFRESH MATERIALIZED VIEW` manually.
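A minimal sketch of the difference (table and view names are hypothetical):

```sql
-- The materialized view is populated once, at creation time.
CREATE MATERIALIZED VIEW order_totals AS
SELECT customer_id, sum(amount) AS total
FROM orders
GROUP BY customer_id;

-- Later changes to `orders` are NOT reflected until you run:
REFRESH MATERIALIZED VIEW order_totals;

-- Non-blocking variant (readers aren't locked out while it rebuilds);
-- requires a unique index on the view.
REFRESH MATERIALIZED VIEW CONCURRENTLY order_totals;
```

In practice people schedule the refresh (cron, pg_cron) or fire it from triggers on the underlying tables.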

quectophoton commented on FFmpeg 8.0   ffmpeg.org/index.html#pr8... · Posted by u/gyan
RedShift1 · 2 days ago
I'm ok with regex, but the ffmpeg manpage, it scares me...
quectophoton · 2 days ago
Ffmpeg was designed to be unusable if it falls into enemy hands.
quectophoton commented on What about using rel="share-url" to expose sharing intents?   shkspr.mobi/blog/2025/08/... · Posted by u/edent
chrismorgan · 3 days ago
There’s already a better spec from 2016, which has even been shipping in the Chromium family since 2019 (Android) or 2021 (desktop):

https://w3c.github.io/web-share-target/

https://developer.mozilla.org/en-US/docs/Web/Progressive_web...

Use that, and the browser/native platform integration is already there, and ShareOpenly becomes more of a stopgap measure.

The only real problem is that you can’t feature-detect share_target support—so you can’t detect if the user is able to add a web app to the user agent’s share targets.

As for ShareOpenly using these things, see https://shareopenly.org/share/?url=https://example.com, and it requires the user to paste a value in once, and then by the looks of it it will remember that site via a cookie. Not great, but I guess it works. But I’m sceptical anyone will really use it.

quectophoton · 3 days ago
> The only real problem is that you can’t feature-detect share_target support

I didn't know this existed, so the first thing I did was check the caniuse website, and yeah, not even they have info about the Web Share Target API[1][2]. As of writing this comment, they only have info about the Web Share API[3].

[1]: https://github.com/Fyrd/caniuse/issues/4670

[2]: https://github.com/Fyrd/caniuse/issues/4906

[3]: https://caniuse.com/web-share
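For contrast, the sending side (the Web Share API) is straightforwardly feature-detectable; a sketch, where the `nav` parameter stands in for `window.navigator` so the check can run anywhere:

```typescript
// Returns true when a Web Share API entry point is present on the given
// navigator-like object. Note there is no analogous runtime check for
// whether the platform will honor a manifest's share_target entry —
// that's the gap the parent comment describes.
function supportsWebShare(nav: { share?: unknown }): boolean {
  return typeof nav.share === "function";
}

// In a browser: supportsWebShare(navigator)
```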

quectophoton commented on Go is still not good   blog.habets.se/2025/07/Go... · Posted by u/ustad
quectophoton · 3 days ago
In practice, none of the things mentioned in the article has been an issue for me, at all. (Upvoted anyway.)

What has been an issue for me, though, is working with private repositories outside GitHub (and I have to clarify that, because working with private repositories on GitHub is different, because Go has hardcoded settings specifically to make GitHub work).

I had hopes for the GOAUTH environment variable, but either (1) I'm more dumb and blind than I thought I already was, or (2) there's still no way to force Go to fetch a module using SSH without trying an HTTPS request first. And no, `GOPRIVATE="mymodule"` and `GOPROXY="direct"` don't do the trick, not even combined with Git's `insteadOf`.
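For reference, this is the usually-recommended combination, which the above says is still not enough (`git.example.com` is a placeholder host):

```shell
# Tell the go command this host is private: bypass the public module
# proxy and the checksum database for matching module paths.
export GOPRIVATE="git.example.com/*"
export GOPROXY="direct"

# Rewrite HTTPS clone URLs to SSH for this host.
git config --global \
  url."ssh://git@git.example.com/".insteadOf "https://git.example.com/"
```

Even with all of this, `go get` still starts by issuing an HTTPS request to the module path (the `?go-get=1` lookup for the `go-import` meta tag) before any VCS operation; the `insteadOf` rewrite only affects the clone URL, not that initial probe, which appears to be the behavior being complained about.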

quectophoton commented on Node.js is able to execute TypeScript files without additional configuration   nodejs.org/en/blog/releas... · Posted by u/steren
rovingeye · 8 days ago
https://github.com/nodejs/node/issues/57215

Not supporting type stripping in node_modules is unfortunate

quectophoton · 8 days ago
But... that's like half the reason why I wanted this feature...

Writing a library in TypeScript (with typechecks in CI/CD as devDependencies) and just importing it directly from Node.js...
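For context, type stripping means a plain `.ts` file runs directly as long as it uses only erasable type syntax (constructs that emit runtime code, like `enum`, are not stripped); a minimal sketch with a hypothetical filename:

```typescript
// greet.ts — runs directly with `node greet.ts` on recent Node versions,
// because everything type-level here erases to plain JavaScript.
interface User {
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}!`;
}

console.log(greet({ name: "Ada" }));
```

The linked issue is why this stops at the application boundary: the same file shipped inside `node_modules` as a published library would not be stripped, so it cannot be imported directly.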

quectophoton commented on FFmpeg moves to Forgejo   code.ffmpeg.org/FFmpeg/FF... · Posted by u/whataguy
trialect · 8 days ago
Forgejo is great and all... up until you're trying to use your SSO with a user named 'admin':

https://codeberg.org/forgejo/forgejo/issues/8030

then it just looks like a bad joke with all the anime girls and everything else...

quectophoton · 8 days ago
There are also some minor issues with composite actions and reusable workflows.

If I use composite actions, the logs get associated with the wrong step[1]. It's just a visual thing (the steps themselves run fine), but having 90% of your action logs in the "Complete job" step is unpleasant.

For reusable workflows there are a few open issues as well, but what happens in my case is that jobs just don't start at all; they stay as "Waiting" forever.

These issues only matter if you write your own reusable actions with YAML (the actions written in JavaScript seem to work fine), but it's worth mentioning.

Other than these two issues, I'm very happy with Forgejo and would still recommend it if people ask for my opinion.

[1]: https://codeberg.org/forgejo/forgejo/issues/5049
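For context, a composite action in the sense used above is a reusable step sequence defined in YAML; a minimal hypothetical `action.yml`:

```yaml
# action.yml — a composite action. Its steps run inline in the calling
# job, which is where the log misattribution described above shows up.
name: "Hello composite"
description: "Minimal composite action example"
runs:
  using: "composite"
  steps:
    - name: Say hello
      run: echo "hello from a composite action"
      shell: bash
```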

quectophoton commented on A mind–reading brain implant that comes with password protection   nature.com/articles/d4158... · Posted by u/gnabgib
AnonymousPlanet · 8 days ago
What if you make people do the hard part voluntarily by making the device desirable to them? Including a receptor inside the skull. Then you just have to pick up the pieces.

Ever watched Ghost in the Shell?

quectophoton · 8 days ago
> What if you make people do the hard part voluntarily by making the device desirable to them?

This. It's like if you want to collect biometric data about everyone's faces with different expressions, different angles, and how those faces change over time, you just make a mobile app where people voluntarily record themselves.

So, if the problems are:

>> It requires several electrodes to be implanted into the patient first. Then there's an adaptation phase in which the patient trains the system.

Then one possible way I can think of to make people do your work for you is to release a nice VR videogame, get it popular, and have some features that make it nicer if you use it ("enhanced controls", or "your HUD shows exactly what you want just by thinking it, like the Iron Man helmet", or whatever).

Taking an existing and popular videogame and making a mod like this would also work.

There's non-zero desire for full-dive MMORPGs, so marketing it like a step towards that would entice a non-zero amount of gamers.

Once it's normalized in niches like that, you'll probably have an easier time expanding outside them, because by then it would be "that videogame tech thingy that cool and rich streamers use" rather than "the sus mind-reading stuff".

It doesn't need to be videogames, but the idea is the same, you make an "inoffensive" thing that people want to use, and then leech off the collected data.

quectophoton commented on Perplexity is using stealth, undeclared crawlers to evade no-crawl directives   blog.cloudflare.com/perpl... · Posted by u/rrampage
shadowgovt · 21 days ago
Not only is it difficult to solve, it's the next step in the process of harvesting content to train AIs: companies will pay humans (probably in some flavor of "company scrip," such as extra queries on their AI engine) to install a browser extension that will piggy-back on their human access to sites and scrape the data from their human-controlled client.

At the limit, this problem is the problem of "keeping secrets while not keeping secrets" and is unsolvable. If you've shared your site content to one entity you cannot control, you cannot control where your site content goes from there (technologically; the law is a different question).

quectophoton · 21 days ago
> companies will pay humans (probably in some flavor of "company scrip," such as extra queries on their AI engine) to install a browser extension that will piggy-back on their human access to sites and scrape the data from their human-controlled client.

Proprietary web browsers are in a really good position to do something like this, especially if they offer a free VPN. The browser would connect to the "VPN servers", but it would be just to signal that this browser instance has an internet connection, while the requests are just proxied through another browser user.

That way the company that owns this browser gets a free network of residential IP addresses, ready to make background requests using a real web browser instance. If one of those background requests hits a CAPTCHA, they can just show it to the real user: e.g. the real user visits a Google page and sees a Cloudflare CAPTCHA, but that CAPTCHA is actually from one of the background requests (while the UI lies and still shows a Google URL in the address bar).

u/quectophoton

Karma: 1064 · Cake day: February 8, 2023

About: World's Smallest Photon.