Readit News
ololobus commented on Show HN: OrioleDB Beta12 Features and Benchmarks   orioledb.com/blog/orioled... · Posted by u/akorotkov
ololobus · 5 months ago
Does it require core patches, or can I install it into standard upstream Postgres? Asking because, afaik, it did, but something might have changed already.
ololobus commented on AI Is Dehumanization Technology   thedabbler.patatas.ca/pag... · Posted by u/smartmic
kelseyfrog · 6 months ago
Whether we like it or not, AI sits at the intersection of both Moravec's paradox and Jevons paradox. Just as more efficient engines lead to increased gas usage, as AI gets increasingly better at problems that are difficult for humans, we see even greater proliferation within that domain.

The reductio on this is the hollowing out of the hard-for-humans problem domain, leaving us to fight for the scraps of the easy-for-humans domain. At first glance this sounds like a win. Who wouldn't want something else to solve the hard problems? The big issue is that easy-for-humans problems are often dull, devoid of meaning, and low-wage. Paradoxically, the hardest problems have always been the ones that make work meaningful.

We stand at a crossroads where one path leads to an existence with a poverty of meaning, where, although humans create and play by their own rules, we feel powerless to change them. What the hell are we doing?

ololobus · 6 months ago
Interesting point of view; I didn't know about Jevons paradox before. To me, the outcome still depends on whether AI can become superhuman [1] (and beyond) at some point. If it can, then, well, we will likely indeed see the suitable-for-humans areas of intellectual labor shrinking. If it cannot, then it becomes an even more philosophical question, similar to debates around agnosticism. Is the universe completely knowable? Because if it's not, then we might as well have infinitely many more hard problems, and AI just raises the bar for what we can achieve by pairing a human with AI compared to a human alone.

[1] I know it's a bit hard to define, but I'd vaguely say it means being significantly better in the majority of areas of intelligence than the vast majority of the population. It should also be scalable: if we can make it only slightly better than a human by burning the entire Earth's energy supply, then it doesn't make much sense.

ololobus commented on AI Is Dehumanization Technology   thedabbler.patatas.ca/pag... · Posted by u/smartmic
bilbo0s · 6 months ago
I don't know, man.

Gonna have to disagree there. A lot of models are being used to reallocate cognitive burden.

A PhD-level biologist with access to the models we can envision in the future will probably be exponentially more valuable than entire bio startups are today. This is because s/he will be using the model to reallocate cognitive burden.

At the same time, I'm not naive. I know that there will be many, many non-PhD-level biologist wannabes who attempt to use models to remove cognitive burden entirely. But what they will discover is that they can't hold a candle to the domain expert who reallocates cognitive burden.

Models don't cause cognitive decline. They make cognitive labor exponentially more valuable than it is today. With the problem being that it creates an even more extreme "winner take all" economic environment that a growing population has to live in. What happens when a startup really only needs a few business types and a small team of domain experts? Today, a successful startup might be hundreds of jobs. What happens when it's just a couple dozen? Or not even a dozen? (Other than the founders and investors capturing even more wealth than they do presently.)

ololobus · 6 months ago
I'd totally agree with this point if we assume that efficiency/performance growth will flatten at some point. For example, if it becomes logarithmic soon, then progress will grow slowly over the next decades. And then, yes, it will likely turn out that current software developers, engineers, scientists, etc., just got an enormously powerful tool that knows many languages almost perfectly and _briefly_ knows the entire internet.

Yet, if we trust all these VC-backed AI startups and assume that it will continue growing rapidly, e.g., at least linearly, over the next few years, I'm afraid it may indeed reach a superhuman _intelligence_ level (let's say p99 or maybe even p999 of the population) in most areas. And then why would you need this top-notch smart-ass human biologist if you could just as well buy a few racks of TPUs?

ololobus commented on Merlin Bird ID   merlin.allaboutbirds.org/... · Posted by u/twitchard
ololobus · 6 months ago
Love it. I do very occasional birdwatching, so I still don’t know most of the birds I meet. What I like about Bird ID is that when I see a singing bird through binoculars, I can quickly identify it, check photos, and really confirm that it’s exactly that bird.

I’ve heard from more experienced birdwatchers that it can misidentify birds in some cases, so I always try to confirm visually, but anyway, for my casual use it’s more than accurate enough.

ololobus commented on Why old games never die, but new ones do   pleromanonx86.wordpress.c... · Posted by u/airhangerf15
ololobus · 7 months ago
The title and overall ‘take’ are very broad; it starts with

> It’s well known that video games today are disposable pieces of slop.

But then it focuses mostly on multiplayer games. For those, I will probably agree that old multiplayer games were more decentralized and self-sufficient, simply because distribution was also less centralized back then.

Yet, overall, I tend to disagree for several reasons:

1. The video game market is vastly larger than it was 20-30 years ago. That’s why we see more crappy games, but there are many, many good games as well.

2. Back then there were bad games as well. YouTube is full of videos where gamers play through old games, and even many popular titles are literally broken, crappy tech demos with broken mechanics, soft locks, bugs, etc.

3. Outside of MMO, F2P, and multiplayer games, there are numerous great games nowadays. Indie developers are very strong. Games like Baldur’s Gate 3 have a quality and amount of content unimaginable for the 2000s game industry. It’s a matter of personal choice, but I can name dozens of titles from the past 10 years or so that are really great.

UPD: formatting

ololobus commented on Boxie – an always offline audio player for my 3 year old   mariozechner.at/posts/202... · Posted by u/badlogic
ololobus · 8 months ago
I was wondering how the cartridges are designed, and I think it’s a very simple and elegant approach: just wire up a standard microSD card. Otherwise, I love such cozy projects. Even if they are not that efficient, they solve the problem and bring joy into someone’s life, both the author’s and a small user’s (in this case).
ololobus commented on Android phones will soon reboot themselves after sitting unused for three days   arstechnica.com/gadgets/2... · Posted by u/namanyayg
greatgib · 8 months ago
It's good to have an option like that, even as a default, but there definitely needs to be a switch to disable it if that is your will.

It's not even necessarily good enough against cops, because in a lot of shitty countries, even some pretending to be democratic, not disclosing or at least not inputting your password might be a severely punished crime. If I'm not wrong, there was a guy who had to stay in jail for years until he would comply with the judge's order to unlock his device.

ololobus · 8 months ago
I can only second this. I have an old iPhone with a second SIM card because I need it from time to time. And Apple introduced this auto-reboot a bit earlier, iirc last year. The problem is that after rebooting it also disconnects from wifi, so e.g. SMS/Handoff synchronization stops working until you enter a passcode. This is very annoying because it was very convenient for me to receive calls/SMS on my main iPhone.

It’s a good and reasonable feature, especially if for some reason you are afraid of state or security agencies where you live, or maybe during travel. It’s still questionable, because in some states you can indeed go to jail if you don’t unlock. Yet, I really want to be able to turn it off for use cases like mine.

ololobus commented on Show HN: MCP-Shield – Detect security issues in MCP servers   github.com/riseandignite/... · Posted by u/nick_wolf
jason-phillips · 8 months ago
> People have been struggling with securing against SQL injection attacks for decades.

Parameterized queries.

A decades old struggle is now lifted from you. Go in peace, my son.

ololobus · 8 months ago
> Parameterized queries.

Also happy to be wrong, but in Postgres clients, parameterized queries are usually implemented via prepared statements, which do not work with DDL at the protocol level. This means that if you want to create a role or table whose name comes from user input, you are in for a bad time. At least I wasn’t able to find a way to escape DDL parameters with rust-postgres, for example.

And because this seems to be a protocol limitation, I guess the clients that do implement it do so in some custom way on the client side.
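For illustration, here is a minimal sketch of what such client-side escaping could look like, based on the standard SQL identifier-quoting rule (wrap the name in double quotes and double any embedded double quotes). The `quote_ident` helper below is a hypothetical example, not an actual rust-postgres API:

```rust
// Hypothetical client-side identifier escaping for DDL statements,
// where the Postgres extended protocol does not accept parameters.
// Standard SQL rule: wrap the name in double quotes and double any
// embedded double quotes.
fn quote_ident(name: &str) -> String {
    format!("\"{}\"", name.replace('"', "\"\""))
}

fn main() {
    // A user-supplied role name containing a quoting/injection attempt.
    let role = "evil\"; DROP ROLE admin; --";
    let ddl = format!("CREATE ROLE {}", quote_ident(role));
    // The embedded quote is doubled, so the whole input stays inside a
    // single quoted identifier instead of terminating it early.
    println!("{}", ddl);
}
```

This is the same rule that Postgres’s own `quote_ident()` server function and libpq’s `PQescapeIdentifier()` apply; a real client would also need to worry about encoding details, which is presumably why it ends up as custom client-side code.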

ololobus commented on Ask HN: Would you recommend a framework laptop?    · Posted by u/ramon156
ololobus · a year ago
I got mine at the end of 2021 and then used it until mid-2023.

I know that it’s not a fair comparison, but I still compare it to MacBooks because I’ve been a Mac user for years.

Pros

- Linux support is amazing; basically you just install one of the popular distros and ‘it works’ (c). I used Pop!_OS and was pretty happy. You also get all the Linux tools like eBPF out of the box, which is a +1 compared to a Mac.

- Extensibility is a big deal. You can get the 1 TB / 32 GB version for pennies compared to a Mac, where upgrades from the base configuration are ridiculously expensive.

- The design and look are very neat.

- The keyboard is a classic one and is also good.

Cons

- Battery life is really bad; same with cooling. At some point I started having more meetings at work, and it got extremely hot and noisy and died very quickly.

- The touchpad is just subpar compared to a Mac’s. Chassis rigidity is also meh. I know they improved the display cover design (switched to CNC), but I have the first revision.

- The display is 2K-ish. I don’t really understand why they went with this resolution; even their new display is around 2.5K. IMO, Linux works best with either 1080p/1K or 4K at 2x scaling (I prefer the latter), because fractional scaling is bad. I struggled a lot with an external 4K monitor because it was nearly impossible to adjust all the sizes so that text looked good on both, especially when you disconnect and go portable. I know it’s Linux and you can DIY everything, but for me it was just too much of a headache.

I still fully support this company and wish them all the best, but since getting a MacBook Pro 14 with M2 (company-owned, not personal) in mid-2023, my Framework has been waiting for two things: i) a 4K display module; and ii) an ARM mainboard. If they release these upgrades, I will jump back to the Framework right away and give it another try.

So I recommend it if the pros are more important than the cons for you.

UPD: formatting and conclusion

ololobus commented on Please stop the coding challenges   blackentropy.bearblog.dev... · Posted by u/CrazyEmi
ololobus · a year ago
I hear it all the time: ‘coding interviews are useless’, ‘peer review in scientific journals is broken’, and so on and so on.

I’d say yes and no.

Yes, these are the problems that cannot be solved perfectly.

No, because in such areas any ‘reasonable’ filter is better than nothing. People say that these assignments don’t have anything to do with reality, but, well, we don’t have months to try working with each other; we only have 3x1h.

I worked as an individual contributor for years, but I have also had the chance to try a hiring manager role for the past 3 years. We do a standard leetcode-style interview (nothing hardcore) + system design. And I always treat both as a starting point and a way to see how the candidate behaves and talks: whether they ask questions to clarify something, and how. And I always try to help if I see that the candidate is stuck. By the end of all the interviews you will have some signal, not a comprehensive personality profile. Do we make mistakes? I’m pretty sure, yes. But I think it just works statistically.

u/ololobus

Karma: 38 · Cake day: April 21, 2015