abhorrence commented on Pebble, Rebble, and a path forward   ericmigi.com/blog/pebble-... · Posted by u/phoronixrly
mcny · a month ago
> I disagree. I’m working hard to keep the Pebble ecosystem open source. I believe the contents of the Pebble Appstore should be freely available and not controlled by one organization.

I hate to say this, but I have to agree with Eric. I want to side with Rebble, but they are clearly misguided. The goal should not be an ongoing revenue stream for Rebble.

The goals should be that, if and when Eric sells out again, there is a way for:

1. all Pebble and Core devices to continue to get updates somehow (Rebble or otherwise)

2. all apps and metadata to continue to be available somehow (Rebble or otherwise)

The "otherwise" is key here. If someone doesn't want to use Rebble, they should be able to do that.

Rebble is not the end goal. Core is not the end goal. The users are.

abhorrence · a month ago
I think your points 1 and 2 are exactly spot on. And, assuming both Rebble and Eric are being relatively forthright, Eric is the one actually trying to come to an agreement that accomplishes that, whereas Rebble is taking the position of "only we can be trusted".

And with all the people replying to the original Rebble post with "I'm canceling my preorder", I'm pretty worried that Rebble has created a self-fulfilling prophecy situation. :(

abhorrence commented on Core Devices keeps stealing our work   rebble.io/2025/11/17/core... · Posted by u/jdauriemma
abhorrence · a month ago
I'm torn here. I love that Rebble folks have kept things alive. I also love that Eric underwent the effort to make new hardware.

I'm also a bit sad that this is the first we're hearing of this tension, because it likely would've changed my decision to purchase a new Core 2 Duo watch, and I would've preferred this sort of falling out happen before a lot of devices have been purchased.

abhorrence commented on Ask HN: What's a good 3D Printer for sub $1000?    · Posted by u/lucideng
cityofdelusion · 3 months ago
You are sadly getting a lot of answers completely ignoring your requirements.

A Voron or RatRig is right up your alley. They are highly customizable: buy a kit as a base, then upgrade components as needed to do more complex printing. They are completely open source and repairable, with no phoning home or any other shenanigans; the GNU/Linux of 3d printers. If you have CAD and machining experience, it should be fairly straightforward.

My Vorons are both extremely reliable, I just hit print for 99% of my stuff and it just works with either auto leveling or static fixed offsets (depends on the Voron chosen). If something doesn’t work out, there is an enormous community with many swappable components and the machines are upgradable year after year, or can be kept in a specific older configuration.

abhorrence · 3 months ago
As someone who owns a couple of Vorons and a couple of Bambu printers, I do think for a lot of people the difference between the two can be "3d printers are my hobby" vs. "3d printers are a tool". It's not that Vorons can't be reliable; in fact, a lot of the reason the X1C is so reliable is that its design essentially started life as a Voron. But because you have to assemble them, they just aren't as "plug and play".
abhorrence commented on What went wrong inside recalled Anker PowerCore 10000 power banks?   lumafield.com/article/wha... · Posted by u/walterbell
AdmiralAsshat · 5 months ago
Amazon sent me a recall notice about this one, indicating they had it from my purchase history, but oddly I couldn't find it in my own collection of power banks, or in the ones I gave to my wife. I'm worried I might have purchased one for another family member as a gift and not remembered who.

The recall is concerning, especially since once they started with that one, they quickly added several more to the list. I've ordered at least 17 Anker products over the last ten years (not all of them power banks). I pay the premium over cheaper external batteries, and I have advised my family to do the same, ostensibly because Anker is supposed to be the brand whose products don't explode. If I can't even take that for granted, then there's really no reason to maintain customer loyalty; there are countless other, cheaper brands available online from no-name Chinese companies.

abhorrence · 5 months ago
They also do recalls, which I'm certain is more than some cheaper no-name brands do.
abhorrence commented on AI capex is so big that it's affecting economic statistics   paulkedrosky.com/honey-ai... · Posted by u/throw0101c
tsunamifury · 5 months ago
It continually surprises me when people are in denial like this.

Literally every profession around me is radically changing due to AI. Legal, tech, marketing, etc. have adopted AI faster than any technology I have ever witnessed.

I’m gobsmacked you’re in denial.

abhorrence · 5 months ago
Interestingly I just talked to several lawyers who were annoyed at how many mistakes were being made and how much time was being wasted due to use of LLMs. I suppose that still qualifies as radically changing — you didn’t specify for the better.
abhorrence commented on Reflections on 2 years of CPython's JIT Compiler   fidget-spinner.github.io/... · Posted by u/bratao
ecshafer · 6 months ago
Does anyone know why, for example, the Ruby team is able to create performant JITs with comparative ease while Python struggles? They are in many ways similar languages, but Python has 10x the developers at this point.
abhorrence · 6 months ago
My complete _guess_ (in which I make a bunch of assumptions!) is that the Ruby team has generally been more willing to make small breaking changes, whereas the Python folks seem to have become timid in that regard after the decade-long transition from 2 -> 3.
abhorrence commented on One Logo, Three Companies   estilofilos.blogspot.com/... · Posted by u/ghc
Etheryte · 10 months ago
Turns out they used to be one conglomerate, but World War II changed that [0]:

> The Mitsubishi Group traces its origins to the Mitsubishi zaibatsu, a unified company that existed from 1870 to 1946. The company, along with other major zaibatsu, was disbanded during the occupation of Japan following World War II by the order of the Allies. Despite the dissolution, the former constituent companies continue to share the Mitsubishi brand and trademark.

[0] https://en.wikipedia.org/wiki/Mitsubishi

abhorrence · 10 months ago
The pencil company referenced in the article does not appear to have been part of the Mitsubishi zaibatsu however.
abhorrence commented on Tech takes the Pareto principle too far   bobbylox.com/blog/tech-ta... · Posted by u/bobbylox
Justta · a year ago
The first 20% of effort will finish 80% of the work. The second 20% of effort will finish 16% more (80% of the remaining 20%). In total, 96% will be finished.
abhorrence · a year ago
I once had a PM who loved the Pareto principle a little too much, and would constantly push us to "apply it" even after we already had. I got frustrated by this and drew the graph that goes along with your sentence, showing that miraculously about 99% of the work can be done with 60% of the effort!

My PM did not take the correct lesson away from the encounter.
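The arithmetic behind that graph can be sketched in a few lines: if each round of 20% effort completes 80% of whatever work remains, the fraction done after n rounds is 1 - 0.2^n. (This is an illustrative sketch of the thread's numbers, not anything from the original posts.)

```python
def work_done(rounds: int) -> float:
    """Fraction of work finished after `rounds` applications of the
    80/20 rule, where each round clears 80% of the remaining work."""
    remaining = 1.0
    for _ in range(rounds):
        remaining *= 0.2  # each round leaves 20% of the previous remainder
    return 1.0 - remaining

for n in range(1, 4):
    print(f"{n * 20}% effort -> {work_done(n):.1%} of the work done")
```

One round gives 80%, two rounds 96%, and three rounds 99.2%, which is the "about 99% of the work with 60% of the effort" punchline above.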

abhorrence commented on Upgrading Uber's MySQL Fleet   uber.com/en-JO/blog/upgra... · Posted by u/benocodes
blindriver · a year ago
You’re not renaming tables when you’re at scale.
abhorrence · a year ago
Sure you do! It's how online schema changes tend to be done, e.g. https://docs.percona.com/percona-toolkit/pt-online-schema-ch... describes doing an atomic rename as the last step.
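A minimal sketch of the shadow-table pattern described there, using SQLite purely for illustration (table and column names are made up; real MySQL tools like pt-online-schema-change also install triggers to keep the copy in sync during the backfill, and MySQL's RENAME TABLE can swap both names in one atomic statement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada'), ('bob')")

# 1. Create a shadow table with the new schema (here: an added column).
conn.execute(
    "CREATE TABLE _users_new (id INTEGER PRIMARY KEY, name TEXT, note TEXT)"
)
# 2. Backfill rows from the original table (done in small chunks at scale).
conn.execute("INSERT INTO _users_new (id, name) SELECT id, name FROM users")
# 3. Swap the tables via rename as the final step, so readers see the
#    new schema under the old table name.
conn.execute("ALTER TABLE users RENAME TO _users_old")
conn.execute("ALTER TABLE _users_new RENAME TO users")

rows = conn.execute("SELECT id, name, note FROM users ORDER BY id").fetchall()
print(rows)
```

Applications keep querying `users` throughout; only the final rename changes which physical table answers to that name.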
abhorrence commented on Learning to Reason with LLMs   openai.com/index/learning... · Posted by u/fofoz
fsndz · a year ago
My point of view: this is a real advancement. I’ve always believed that with the right data allowing the LLM to be trained to imitate reasoning, it’s possible to improve its performance. However, this is still pattern matching, and I suspect that this approach may not be very effective for creating true generalization. As a result, once o1 becomes generally available, we will likely notice the persistent hallucinations and faulty reasoning, especially when the problem is sufficiently new or complex, beyond the “reasoning programs” or “reasoning patterns” the model learned during the reinforcement learning phase. https://www.lycee.ai/blog/openai-o1-release-agi-reasoning
abhorrence · a year ago
> As a result, once o1 becomes generally available, we will likely notice the persistent hallucinations and faulty reasoning, especially when the problem is sufficiently new or complex, beyond the “reasoning programs” or “reasoning patterns” the model learned during the reinforcement learning phase.

I had been using 4o as a rubber ducky for some projects recently. Since I appeared to have access to o1-preview, I decided to go back and redo some of those conversations with o1-preview.

I think your comment is spot on. It's definitely an advancement, but still makes some pretty clear mistakes and does some fairly faulty reasoning. It especially seems to have a hard time with causal ordering, and reasoning about dependencies in a distributed system. Frequently it gets the relationships backwards, leading to hilarious code examples.

u/abhorrence · karma 872 · joined September 20, 2013