Readit News
abhorrence commented on What went wrong inside recalled Anker PowerCore 10000 power banks?   lumafield.com/article/wha... · Posted by u/walterbell
AdmiralAsshat · a month ago
Amazon sent me a recall notice about this one, indicating they had it from my purchase history, but oddly I couldn't find it in my own collection of power banks, or in the ones I gave to my wife. I'm worried I might have purchased one for another family member as a gift and not remembered who.

The recall is concerning, especially since once they started with the one, they quickly added several more to the list. I've ordered at least 17 Anker products over the last ten years (not all of them power banks). I pay the premium over cheaper external batteries, and I have advised my family in the past to do the same. This is ostensibly because they are supposed to be the guys that don't explode. If I can't even take that for granted, then there's really no reason to maintain customer loyalty. There are countless other, cheaper brands available online from no-name Chinese companies.

abhorrence · a month ago
They also do recalls, which I'm certain is more than some cheaper no-name brands do.
abhorrence commented on AI capex is so big that it's affecting economic statistics   paulkedrosky.com/honey-ai... · Posted by u/throw0101c
tsunamifury · a month ago
It continually surprises me when people are in denial like this.

Literally every profession around me is radically changing due to AI. Legal, tech, marketing etc have adopted AI faster than any technology I have ever witnessed.

I’m gobsmacked you’re in denial.

abhorrence · a month ago
Interestingly I just talked to several lawyers who were annoyed at how many mistakes were being made and how much time was being wasted due to use of LLMs. I suppose that still qualifies as radically changing — you didn’t specify for the better.
abhorrence commented on Reflections on 2 years of CPython's JIT Compiler   fidget-spinner.github.io/... · Posted by u/bratao
ecshafer · 2 months ago
Does anyone know why, for example, the Ruby team is able to create performant JITs with comparative ease, unlike Python? They are in many ways similar languages, but Python has 10x the developers at this point.
abhorrence · 2 months ago
My complete _guess_ (in which I make a bunch of assumptions!) is that generally it seems like the Ruby team has been more willing to make small breaking changes, whereas it seems a lot like the Python folks have become timid in those regards after the decade of transition from 2 -> 3.
abhorrence commented on One Logo, Three Companies   estilofilos.blogspot.com/... · Posted by u/ghc
Etheryte · 6 months ago
Turns out they used to be one conglomerate, but World War II changed that [0]:

> The Mitsubishi Group traces its origins to the Mitsubishi zaibatsu, a unified company that existed from 1870 to 1946. The company, along with other major zaibatsu, was disbanded during the occupation of Japan following World War II by the order of the Allies. Despite the dissolution, the former constituent companies continue to share the Mitsubishi brand and trademark.

[0] https://en.wikipedia.org/wiki/Mitsubishi

abhorrence · 6 months ago
The pencil company referenced in the article does not appear to have been part of the Mitsubishi zaibatsu however.
abhorrence commented on Tech takes the Pareto principle too far   bobbylox.com/blog/tech-ta... · Posted by u/bobbylox
Justta · 7 months ago
The first 20% of effort finishes 80% of the work. The second 20% of effort finishes 80% of the remaining 20%, i.e. another 16%. In total, 96% is finished.
abhorrence · 7 months ago
I once had a PM who loved the Pareto principle a little too much, and would constantly push us to "apply it" even after we already had. I got frustrated by this and drew the graph that goes along with your sentence, showing that miraculously about 99% of the work can be done with 60% of the effort!

My PM did not take the correct lesson away from the encounter.
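The arithmetic behind that graph can be sketched in a few lines: if each additional 20% tranche of effort completes 80% of whatever work remains, the completed fraction converges quickly (the 60%-effort figure works out to 99.2%, the "about 99%" mentioned above).

```python
def work_done(tranches: int) -> float:
    """Fraction of work completed after `tranches` 20%-effort tranches,
    assuming each tranche finishes 80% of the remaining work."""
    return 1 - 0.2 ** tranches

for t in range(1, 4):
    print(f"{t * 20}% effort -> {work_done(t):.1%} of the work")
# 20% effort -> 80.0% of the work
# 40% effort -> 96.0% of the work
# 60% effort -> 99.2% of the work
```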

abhorrence commented on Upgrading Uber's MySQL Fleet   uber.com/en-JO/blog/upgra... · Posted by u/benocodes
blindriver · a year ago
You’re not renaming tables when you’re at scale.
abhorrence · a year ago
Sure you do! It's how online schema changes tend to be done, e.g. https://docs.percona.com/percona-toolkit/pt-online-schema-ch... describes doing an atomic rename as the last step.
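A minimal sketch of that shadow-table-plus-rename flow, using SQLite as a stand-in for MySQL (the table names and the added `email` column are hypothetical, and the trigger-based backfill that pt-online-schema-change performs while rows are copied is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# 1. Create a shadow table with the new schema.
conn.execute(
    "CREATE TABLE users_new (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
)

# 2. Copy existing rows over. In production this is chunked, with triggers
#    keeping the shadow table in sync with concurrent writes.
conn.execute("INSERT INTO users_new (id, name) SELECT id, name FROM users")

# 3. Swap the tables via renames in one transaction. In MySQL this is a
#    single atomic statement: RENAME TABLE users TO users_old, users_new TO users.
with conn:
    conn.execute("ALTER TABLE users RENAME TO users_old")
    conn.execute("ALTER TABLE users_new RENAME TO users")

rows = conn.execute("SELECT id, name, email FROM users ORDER BY id").fetchall()
print(rows)  # [(1, 'alice', None), (2, 'bob', None)]
```

Readers now see the new schema under the old table name, which is why the final rename being atomic matters.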
abhorrence commented on Learning to Reason with LLMs   openai.com/index/learning... · Posted by u/fofoz
fsndz · a year ago
My point of view: this is a real advancement. I’ve always believed that with the right data allowing the LLM to be trained to imitate reasoning, it’s possible to improve its performance. However, this is still pattern matching, and I suspect that this approach may not be very effective for creating true generalization. As a result, once o1 becomes generally available, we will likely notice the persistent hallucinations and faulty reasoning, especially when the problem is sufficiently new or complex, beyond the “reasoning programs” or “reasoning patterns” the model learned during the reinforcement learning phase. https://www.lycee.ai/blog/openai-o1-release-agi-reasoning
abhorrence · a year ago
> As a result, once o1 becomes generally available, we will likely notice the persistent hallucinations and faulty reasoning, especially when the problem is sufficiently new or complex, beyond the “reasoning programs” or “reasoning patterns” the model learned during the reinforcement learning phase.

I had been using 4o as a rubber ducky for some projects recently. Since I appeared to have access to o1-preview, I decided to go back and redo some of those conversations with o1-preview.

I think your comment is spot on. It's definitely an advancement, but still makes some pretty clear mistakes and does some fairly faulty reasoning. It especially seems to have a hard time with causal ordering, and reasoning about dependencies in a distributed system. Frequently it gets the relationships backwards, leading to hilarious code examples.

abhorrence commented on Small Strings in Rust: smolstr vs. smartstring (2020)   fasterthanli.me/articles/... · Posted by u/airstrike
abhorrence · a year ago
Sadly it seems like some of the images have broken since it was originally posted. :(
abhorrence commented on NLRB judge declares non-compete clause is an unfair labor practice   nlrbedge.com/p/in-first-c... · Posted by u/lalaland1125
thegrim33 · a year ago
> "there are certain kinds of questions that are not permissible on an employment questionnaire" .. "these questions are often included in the questionnaire".

I don't follow. It's not permissible but these companies just blatantly ignore the law and ask it anyways? Or it is permissible?

abhorrence · a year ago
Presumably they ignore the law.
abhorrence commented on Falcon 2   tii.ae/news/falcon-2-uaes... · Posted by u/tosh
simonw · a year ago
The license is not good: https://falconllm-staging.tii.ae/falcon-2-terms-and-conditio...

It's a modified Apache 2 license with extra clauses that include a requirement to abide by their acceptable use policy, hosted here: https://falconllm-staging.tii.ae/falcon-2-acceptable-use-pol...

But... that modified Apache 2 license says the following:

"The Acceptable Use Policy may be updated from time to time. You should monitor the web address at which the Acceptable Use Policy is hosted to ensure that your use of the Work or any Derivative Work complies with the updated Acceptable Use Policy."

So no matter what you think of their current AUP they reserve the right to update it to anything they like in the future, and you'll have to abide by the new one!

Great example of why I don't like the trend of calling licenses like this "open source" when they aren't compatible with the OSI definition.

abhorrence · a year ago
> So no matter what you think of their current AUP they reserve the right to update it to anything they like in the future, and you'll have to abide by the new one!

I'm so curious if this would actually hold up in court. Does anyone know if there's any case law / precedent around this?

u/abhorrence

Karma: 852 · Cake day: September 20, 2013