Readit News
Difwif commented on WorldGen – Text to Immersive 3D Worlds   meta.com/en-gb/blog/world... · Posted by u/smusamashah
Difwif · a month ago
This just seems like an engineered pipeline of existing GenAI models to get a procedurally generated 3D world that doesn't even look SOTA. I'm really sorry to dunk on this for those who worked on it, but this doesn't look like progress to me. The current approach looks like a dead end.

An end-to-end _trained_ model that spits out a textured mesh of the same result would have been an innovation. The fact that they didn't do that suggests they're missing something fundamental for world model training.

The best thing I can say is that maybe they can use this to bootstrap a dataset for a future model.

Difwif commented on USDA head says 'everyone' on SNAP will now have to reapply   thehill.com/homenews/admi... · Posted by u/sipofwater
qgin · a month ago
Anyone who has ever worked on any sort of sales funnel knows: every time you ask someone to take an additional action, you lose people. Ask everybody to reapply, you'll end up with fewer people. You can say that's evidence of previous fraud, but it's largely just going to be people who didn't make it through the additional friction.
Difwif · a month ago
[removed]
Difwif commented on First recording of a dying human brain shows waves similar to memory flashbacks (2022)   louisville.edu/medicine/n... · Posted by u/thunderbong
squarefoot · 2 months ago
Most bullies just vent out what they suffer at home, school or workplace. They already punish themselves by not reacting against the real source of their problems.
Difwif · 2 months ago
A valid rationalization but never an excuse. At some point the buck has to stop being passed around. Standing up to all instances of violence is the only way to stop the endless cycles.
Difwif commented on Uv is the best thing to happen to the Python ecosystem in a decade   emily.space/posts/251023-... · Posted by u/todsacerdoti
embe42 · 2 months ago
You might be interested in pixi, which is roughly to conda as uv is to pip (also written in Rust, it reuses the uv solver for PyPI packages)
Difwif · 2 months ago
Pixi has also been such a breath of fresh air for me. I think it's as big a deal as uv (it uses uv under the hood for the pure-Python parts).

It's still very immature but if you have a mixture of languages (C, C++, Python, Rust, etc.) I highly recommend checking it out.

Difwif commented on Asahi Linux Still Working on Apple M3 Support, M1n1 Bootloader Going Rust   phoronix.com/news/Asahi-L... · Posted by u/LorenDB
prmoustache · 2 months ago
>Losing a day of productivity fixing drivers and patching kernels gets old.

You are talking like it was 1997.

The typical Linux user doesn't have to do that. Only those who buy unsupported devices on purpose for the challenge of making them work.

Difwif · 2 months ago
Well, you should tell that to Dell, because I have coworkers with a range of their models who are constantly fighting with webcams, audio, Bluetooth, Wi-Fi, and Nvidia driver updates.
Difwif commented on Asahi Linux Still Working on Apple M3 Support, M1n1 Bootloader Going Rust   phoronix.com/news/Asahi-L... · Posted by u/LorenDB
ActorNightly · 2 months ago
>I'm not really sure what to do.

Get any of the modern laptops with good battery life, install a Linux distro like elementary OS without any hacks or workarounds (or better yet, i3wm, which is the best window manager for laptops), and never look back.

Or do what I do, which is buy $200 Dells/ThinkPads off eBay, and for anything requiring CPU, just ssh into your home server.

Personally I went a step further and use a lapdock with a Samsung phone: it acts like a laptop with Termux, and I can do pretty much everything with good battery life, because the lapdock battery also charges the phone.

Difwif · 2 months ago
I used to be in this camp until I tried and then bought an M1 MacBook as my daily driver. I thought I'd be running a ThinkPad/XPS with Linux until I die. I don't love macOS, but POSIX is mostly good enough for me, and the hardware is so good that I'm willing to look past the shortfalls.

Seriously, I would love to switch back to a full-time Linux distro, but I'm more interested in getting work done and having a stable, performant platform. Losing a day of productivity fixing drivers and patching kernels gets old. The M-series laptops have been the perfect balance for me so far.

Difwif commented on Addendum to GPT-5 system card: GPT-5-Codex   openai.com/index/gpt-5-sy... · Posted by u/wertyk
Difwif · 3 months ago
Is this available to use now in Codex? Should I see a new /model?
Difwif commented on Microsoft is officially sending employees back to the office   businessinsider.com/micro... · Posted by u/alloyed
dragonwriter · 3 months ago
RTO mandates are about many things, but actual business value of being in the office to the business doing the mandate is low on the list. Among the things it is about:

(1) Executives with emotional attachment to certain leadership styles that are enabled by physical presence,

(2) Interest in the investor class for the commercial real estate market. The business impacted may not be invested in it, but the businesses’ shareholders in sufficient numbers probably are, and so are the influential constituents of the politicians they want favors from, in a time of increasingly naked political corruption and cronyism.

(3) Backdoor layoffs. RTO is unpopular with large swathes of the workforce, and people will quit because of it. That's good for a firm likely to be cutting positions anyway: there's no need for severance, regardless of scale there's no WARN Act notice requirement, and if you still have to cut more positions afterwards, it makes it less likely that those cuts will hit WARN Act thresholds. And while the people who quit may not be the ones it would be your first choice to cut, they are the ones most likely to quit in the kind of less-employee-friendly and financially leaner (in real terms) times likely to exist for a while after the cuts.

Difwif · 3 months ago
(2) Seems like a media narrative rather than truth. I don't think that would be anywhere remotely high on a CEO's priority list unless they were a commercial real estate company.

It's far more likely a mixture of (1) and actual results: in-person/hybrid teams produce better outcomes (even if the reason why hasn't been deeply evaluated, or ultimately falls on management).

Difwif commented on Why language models hallucinate   openai.com/index/why-lang... · Posted by u/simianwords
didibus · 3 months ago
When tuning predictive models you always have to balance precision and recall because 100% accuracy is never going to happen.

In LLMs that balance shows up as how often the model hallucinates versus how often it says it doesn’t know. If you push toward precision you end up with a model that constantly refuses: What’s the X of Y? I don’t know. Can you implement a function that does K? I don’t know how. What could be the cause of G? I can’t say. As a user that gets old fast, you just want it to try, take a guess, let you be the judge of it.
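
The precision/recall tension described above can be made concrete with a toy calculation (the counts below are invented for illustration): a model that always answers trades precision for recall, while one that often refuses does the opposite.

```python
# Toy illustration of the precision/recall trade-off for a QA model.
# "Refusing" a question counts as a false negative (a correct answer
# the model withheld); answering wrongly counts as a false positive.

def precision_recall(true_pos: int, false_pos: int, false_neg: int) -> tuple[float, float]:
    """Standard precision and recall from confusion counts."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Eager model: answers all 100 questions, 80 right, 20 hallucinated.
eager = precision_recall(true_pos=80, false_pos=20, false_neg=0)

# Cautious model: refuses 30 questions, answers 70 and gets 68 right.
cautious = precision_recall(true_pos=68, false_pos=2, false_neg=30)

print(eager)     # (0.8, 1.0)
print(cautious)  # ≈ (0.971, 0.694)
```

The eager model never says "I don't know" (perfect recall) but hallucinates a fifth of the time; the cautious one rarely hallucinates but leaves a third of answerable questions on the table.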

Benchmarks and leaderboards usually lean toward recall because a model that always gives it a shot creates a better illusion of intelligence, even if some of those shots are wrong. That illusion keeps users engaged, which means more users and more money.

And that's why LLMs hallucinate :P

Difwif · 3 months ago
It would be interesting to see two versions of a model. A primary model tuned for precision that's focused on correctness that works with or orchestrates a creative model that's tuned for generating new (and potentially incorrect) ideas. The primary model is responsible for evaluating and reasoning about the ideas/hallucinations. Feels like a left/right brain architecture (even though that's an antiquated model of human brain hemispheres).
Difwif commented on A staff engineer's journey with Claude Code   sanity.io/blog/first-atte... · Posted by u/kmelve
Aeolun · 4 months ago
> I suspect videos meeting your criteria are rare because most AI coding demos either cherry-pick simple problems or skip the messy reality of maintaining real codebases.

Or we’re just having too much fun making stuff to make videos to convince people that are never going to be convinced.

Difwif · 4 months ago
I took a quick informal poll of my coworkers, and the majority of us have found workflows where CC is producing 70-99% of the code in PRs on average. We're getting more done faster. Most of these people have anywhere from 5-12 years of professional experience. There are some concerns that maybe more bugs are slipping through (but there's also more code being produced).

We agree most problems stem from getting these things wrong:

1. Getting lazy and auto-accepting edits. Always review changes and make sure you understand everything.

2. Clearly written specification documents before starting complex work items.

3. Breaking down tasks into a manageable chunk of scope.

4. Clean, digestible code architecture. If it's hard for a human to understand (e.g. poor separation of concerns), it will be hard for the LLM too.

But yeah I would never waste my time making that video. Having too much fun turning ideas into products to care about proving a point.

u/Difwif

Karma: 292 · Cake day: April 30, 2012