With PostgreSQL the materialized view won't be automatically updated, though; you need to run `REFRESH MATERIALIZED VIEW` manually.
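For reference, a minimal sketch (table, view, and column names are made up); note the plain refresh blocks readers while it runs, and the `CONCURRENTLY` variant needs a unique index on the view:

```sql
-- Hypothetical schema, for illustration only.
CREATE MATERIALIZED VIEW order_totals AS
  SELECT customer_id, sum(amount) AS total
  FROM orders
  GROUP BY customer_id;

-- Must be run manually (or from cron/pg_cron); locks out readers while it runs:
REFRESH MATERIALIZED VIEW order_totals;

-- The non-blocking variant requires a unique index on the view:
CREATE UNIQUE INDEX ON order_totals (customer_id);
REFRESH MATERIALIZED VIEW CONCURRENTLY order_totals;
```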
https://w3c.github.io/web-share-target/
https://developer.mozilla.org/en-US/docs/Web/Progressive_web...
Use that, and the browser/native platform integration is already there, and ShareOpenly becomes more of a stopgap measure.
The only real problem is that you can't feature-detect share_target support, so you can't tell whether the user is able to add a web app to the user agent's share targets.
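For context, declaring a share target is just an entry in the web app manifest; a minimal sketch going by the spec above (the /share path and param names are just one possible setup):

```json
{
  "name": "Example App",
  "share_target": {
    "action": "/share",
    "method": "GET",
    "params": {
      "title": "title",
      "text": "text",
      "url": "url"
    }
  }
}
```

When another app shares to yours, the browser opens /share?title=...&text=...&url=... and your page takes it from there.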
As for ShareOpenly using these things, see https://shareopenly.org/share/?url=https://example.com. It requires the user to paste a value in once, and by the looks of it, it then remembers that site via a cookie. Not great, but I guess it works. But I'm sceptical anyone will really use it.
I didn't know this existed, so the first thing I did was check the caniuse website, and yeah, not even they have info about the Web Share Target API[1][2]. As of writing this comment, they only have info about the Web Share API[3].
[1]: https://github.com/Fyrd/caniuse/issues/4670
What has been an issue for me, though, is working with private repositories outside GitHub (and I have to clarify that, because working with private repositories on GitHub is different: Go has hardcoded settings specifically to make GitHub work).
I had hopes for the GOAUTH environment variable, but either (1) I'm more dumb and blind than I thought I already was, or (2) there's still no way to force Go to fetch a module using SSH without trying an HTTPS request first. And no, `GOPRIVATE="mymodule"` and `GOPROXY="direct"` don't do the trick, not even combined with Git's `insteadOf`.
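For reference, this is the recipe I mean (host and module path are hypothetical), which in my experience still doesn't stop Go from probing HTTPS first:

```sh
# Bypass the public module proxy and checksum database for this module:
export GOPRIVATE="git.example.com/mygroup/mymodule"
export GOPROXY="direct"

# Rewrite HTTPS remotes to SSH so git itself uses the SSH transport:
git config --global url."git@git.example.com:".insteadOf "https://git.example.com/"
```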
Not supporting type stripping in node_modules is unfortunate.
Writing a library in TypeScript (with typechecks in CI/CD and TypeScript only as a devDependency) and just importing it directly from Node.js...
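To illustrate, a minimal sketch (assuming a Node version with type stripping enabled; file names are made up):

```ts
// my-lib.ts -- types are erased at load time, no build step needed.
export interface Greeting {
  who: string;
}

export function greet(g: Greeting): string {
  return `hello, ${g.who}`;
}
```

```ts
// main.ts -- note the explicit .ts extension, required under type stripping.
import { greet } from "./my-lib.ts";

console.log(greet({ who: "world" }));
```

`node main.ts` runs this fine, but move my-lib.ts under node_modules/ and Node refuses to strip it (ERR_UNSUPPORTED_NODE_MODULES_TYPE_STRIPPING, if I remember the error code right), so a TS-only package still needs a build step before publishing.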
https://codeberg.org/forgejo/forgejo/issues/8030
then it just looks like a bad joke with all the anime girls and everything else...
If I use composite actions, the logs get associated with the wrong step[1]. It's just a visual thing (the steps themselves run fine), but having 90% of your action logs in the "Complete job" step is unpleasant.
For reusable workflows there are a few open issues as well, but what happens in my case is that jobs just don't start at all; they stay as "Waiting" forever.
These issues only matter if you write your own reusable actions with YAML (the actions written in JavaScript seem to work fine), but it's worth mentioning.
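For anyone unfamiliar, a "composite action" is just an action whose steps are written in YAML instead of JavaScript; a minimal sketch of the kind of action I mean (name and path are made up):

```yaml
# hello/action.yml (hypothetical)
name: hello
description: Example composite action
runs:
  using: composite
  steps:
    - name: Say hello
      shell: bash
      run: echo "hello from a composite step"
```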
Other than these two issues, I'm very happy with Forgejo and would still recommend it if people ask for my opinion.
Ever watched Ghost in the Shell?
This. It's like: if you want to collect biometric data about everyone's faces (different expressions, different angles, and how those faces change over time), you just make a mobile app where people voluntarily record themselves.
So, if the problems are:
>> It requires several electrodes to be implanted into the patient first. Then there's an adaptation phase in which the patient trains the system.
Then one possible way I can think of to make people do your work for you is to release a VR videogame nice enough that it becomes popular, and have some features that are nicer if you have the implant ("enhanced controls", or "your HUD shows exactly what you want just by thinking it, like an Iron Man helmet", or whatever).
Taking an existing and popular videogame and making a mod like this would also work.
There's non-zero desire for full-dive MMORPGs, so marketing it as a step towards that would entice a non-zero number of gamers.
Once it's normalized in niches like that, you'll probably have an easier time expanding outside them, because by then it will be "that videogame tech thingy that cool and rich streamers use" rather than "the sus mind-reading stuff".
It doesn't need to be videogames, but the idea is the same: you make an "inoffensive" thing that people want to use, and then leech off the collected data.
At the limit, this problem is the problem of "keeping secrets while not keeping secrets" and is unsolvable. If you've shared your site content to one entity you cannot control, you cannot control where your site content goes from there (technologically; the law is a different question).
Proprietary web browsers are in a really good position to do something like this, especially if they offer a free VPN. The browser would connect to the "VPN servers", but only to signal that this browser instance has an internet connection, while the actual requests are proxied through other browser users.
That way the company that owns the browser gets, for free, a network of residential IP addresses ready to make requests (in the background) from a real web browser instance. If one of those background requests hits a CAPTCHA, they can just show it to the real user: e.g. the real user visits a Google page and sees a Cloudflare CAPTCHA, but that CAPTCHA actually belongs to one of the background requests (while the UI lies and keeps showing a Google URL in the address bar).
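A sketch of how the client side of such a scheme could look, purely hypothetical (the endpoint and message format are invented):

```ts
// Hypothetical logic inside the browser's built-in "VPN" client.
// The control channel doubles as the online signal: connecting tells the
// vendor that this residential IP is available for background jobs.
const control = new WebSocket("wss://vpn.example-browser.com/control");

control.onmessage = async (event) => {
  const job = JSON.parse(String(event.data)) as { id: string; url: string };

  // The request is made by a real browser instance from a residential IP.
  const response = await fetch(job.url, { credentials: "omit" });

  control.send(JSON.stringify({
    id: job.id,
    status: response.status,
    body: await response.text(),
  }));

  // A CAPTCHA in `body` is where the UI trick above would kick in:
  // render it to the human user under whatever URL they are really visiting.
};
```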
So then, if we can cook a chicken like this, we can also heat a whole house like this in winter, right? We just need a chicken-slapper that's even bigger and even faster, and slap the whole house to heat it up.
There are probably better analogies (because I know people will nitpick that we knew about fire way before kinetic energy), so maybe AI = "flight by inventing machines with flapping wings" and AGI = "space travel with machines that flap their wings even faster". But the house-sized chicken-slapper illustrates how I view the current trend of trying to reach AGI by scaling up LLMs.
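For fun, the house-slapping arithmetic even "works", in the same technically-true-but-absurd way (all numbers are rough assumptions):

```ts
// Back-of-the-envelope: warm the air in a house by slapping it.
const airVolume = 500;    // m^3 of air in a smallish house (assumed)
const airDensity = 1.2;   // kg/m^3
const cAir = 1005;        // J/(kg*K), specific heat of air
const deltaT = 10;        // K of desired warming
const slapperMass = 1000; // kg, our house-sized chicken-slapper (assumed)

// Energy to warm the air, ignoring walls, leaks, and furniture:
const energy = airVolume * airDensity * cAir * deltaT; // ~6.0e6 J

// Speed one slap needs, from E = (1/2) * m * v^2:
const slapSpeed = Math.sqrt((2 * energy) / slapperMass); // ~110 m/s

console.log(`${(energy / 1e6).toFixed(1)} MJ, one slap at ~${slapSpeed.toFixed(0)} m/s`);
```

So yes, energy is energy and one ~110 m/s slap per cold morning would do it; practically, you no longer have a house.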