Readit News
frank_nitti commented on I want everything local – Building my offline AI workspace   instavm.io/blog/building-... · Posted by u/mkagenius
doctorpangloss · 14 days ago
NVIDIA drivers send detailed telemetry.

Windows and macOS send detailed telemetry.

You have to install the pip packages and the models, which all come from websites, which collect detailed telemetry.

You don’t think Microsoft gathers detailed telemetry on all your interactions with GitHub?

The local setup doesn’t really help with that.

frank_nitti · 14 days ago
We might be talking about two different things. Yes, under normal circumstances the setup steps involve software that defaults to sending telemetry -- though I'd be surprised if it weren't still possible to do all of that in an air-gapped environment using, e.g., offline installers, zipped repos, and wheel files.

My comment was referring to runtime workloads having no telemetry (because I unplugged the internet)
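The air-gapped install workflow mentioned above can be sketched roughly like this, assuming a pip-based stack (the package names are just placeholders):

```shell
# On an internet-connected machine: fetch wheels for your target platform.
pip download --dest ./wheelhouse torch transformers

# Move ./wheelhouse to the air-gapped machine (USB drive, etc.), then
# install without contacting any package index:
pip install --no-index --find-links ./wheelhouse torch transformers
```

With `--no-index`, pip resolves everything from the local wheelhouse, so the install step makes no network calls at all.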

frank_nitti commented on I want everything local – Building my offline AI workspace   instavm.io/blog/building-... · Posted by u/mkagenius
doctorpangloss · 16 days ago
The entire stack involved sends so much telemetry.
frank_nitti · 16 days ago
This, in particular, is a big motivator and a rewarding part of getting a local setup working. Turning off the internet and seeing everything run end to end is a joy.
frank_nitti commented on I want everything local – Building my offline AI workspace   instavm.io/blog/building-... · Posted by u/mkagenius
sneak · 16 days ago
This writeup has nothing of the sort and is not helpful toward that goal.
frank_nitti · 16 days ago
I'd assume they are referring to being able to run your own workloads on a home-built system, rather than surrendering that ownership to the tech giants alone.
frank_nitti commented on I want everything local – Building my offline AI workspace   instavm.io/blog/building-... · Posted by u/mkagenius
braooo · 16 days ago
Running LLMs at home is a repeat of the mess we make with "run a K8s cluster at home" thinking

You're not OpenAI or Google. Just use pytorch, opencv, etc to build the small models you need.

You don't need Docker even! You can share over a simple code-based HTTP router app and pre-shared certs with friends.

You're recreating the patterns required to manage a massive data center in 2-3 computers in your closet. That's insane.

frank_nitti · 16 days ago
For me, this is essential. On principle, I won't pay money to be a software engineer.

I never paid for cloud infrastructure out of pocket, but still became the go-to person and achieved lead architecture roles for cloud systems, because learning the FOSS/local tooling "the hard way" put me in a better position to understand exactly what my corporate employers can leverage with the big cash they pay the CSPs.

The same is shaping up in this space. Learning the nuts and bolts of wiring systems together locally, with whatever Gen AI workloads my hardware can support, and tinkering with parts of the process is the only thing that can actually keep me interested and able to excel on this front relative to my peers who just fork out their own money to the fat cats who own billions worth of compute.

I'll continue to support efforts to keep us on the track of engineers still understanding and being able to 'own' their technology from the ground up, if only at local tinkering scale.

frank_nitti commented on Open music foundation models for full-song generation   map-yue.github.io/... · Posted by u/selvan
bangaladore · 17 days ago
What is the use case for music generation models? I see use cases for a lot of the other foundation models like text, image, TTS, STT, but why do I want AI-generated music?
frank_nitti · 17 days ago
I’ve mostly used them for laughs with my friends. Sometimes generating “custom” songs with funny lyrics, but most fun so far is editing lyrics of existing songs to say ridiculous things for fun.

No real clue how someone would use them for a more serious endeavor; the only thing I could imagine would be to quickly iterate/prototype song structures on a fixed seed to generate ideas for a real composition. Consider the case of an indie game developer or filmmaker getting some placeholder music to test the experience during early throwaway iterations.

frank_nitti commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
sanderjd · 3 months ago
Yeah, I should have posted the first version of my post, pointing out that the problem with this demand for proof (as is often the case) devolves into boring definitional questions.

I don't understand why you think "the code needs to be audited and revised" is a failure.

Nothing in the OP relies on it being possible for LLMs to build and deploy software unsupervised. It really seems like a non sequitur to me, to ask for proof of this.

frank_nitti · 3 months ago
That’s fair regarding the OP, and I otherwise agree with your sentiments here.

Some other threads of conversation get intertwined here with concerns about delusional management making decisions to cut staff and reduce hiring for junior positions, on the strength of the promises made by AI vendors and their paid/voluntary shills.

For many like me who have encouraged sharp young people to learn computers, we are watching their spirits get crushed by this narrative and feel a strong urge to push back — we still need new humans to learn how computer systems actually work, and if nobody is willing to pay them because an LLM outperforms them on those menial “rite-of-passage” types of software construction, we will find ourselves in a bad place.

frank_nitti commented on Cloudflare builds OAuth with Claude and publishes all the prompts   github.com/cloudflare/wor... · Posted by u/gregorywegory
nijave · 3 months ago
Isn't there some way to speed up with codegen besides using LLMs?
frank_nitti · 3 months ago
Some may have a better answer, but I often compare this with tools like the OpenAPI and AsyncAPI generators, where HTTP/AMQP/etc. code can be generated for servers, clients, and extended documentation viewers.

The trade-off here is that you must create the spec file (and customize the template files where needed) that drives the codegen, in exchange for explicit control over deterministic output. So there’s more typing, but potentially less cognitive overhead than reviewing a bunch of LLM output.

For this use case I find the explicit codegen UX preferable to inspecting what the LLM decided to do with my human-language prompt, if attempting to have the LLM directly code the library/executable source (as opposed to asking it to create the generator, template or API spec).
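As a rough illustration of the spec-driven approach, a minimal OpenAPI file like the hypothetical sketch below is the kind of artifact that drives such a generator (e.g. via `openapi-generator-cli generate -i api.yaml -g python -o ./client`); every name here is made up for the example:

```yaml
# api.yaml -- hypothetical minimal spec driving the codegen
openapi: "3.0.3"
info:
  title: Example Service
  version: "0.1.0"
paths:
  /items/{id}:
    get:
      operationId: getItem
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: A single item
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  name:
                    type: string
```

The generator emits the same client/server code for the same spec every time, which is the deterministic-output property the comment above is trading against the extra typing.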

frank_nitti commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
sanderjd · 3 months ago
What kind of proof are you looking for here, exactly? Lots of businesses are successfully using AI... There are many anecdotes of this, which you can read here, or even in the article you commented on.

What else are you looking for?

frank_nitti · 3 months ago
What do you mean by “successfully using AI”, do you just mean some employee used it and found it helpful at some stage of their dev process, e.g. in lieu of search engines or existing codegen tooling?

Are there any examples of businesses deploying production-ready, nontrivial code changes without a human spending a comparable (or much greater) amount of time as they’d have needed to with the existing SOTA dev tooling outside of LLMs?

That’s my interpretation of the question at hand. In my experience, LLMs have been very useful for developers who don’t know where to start on a particular task, or need to generate some trivial boilerplate code. But on nearly every occasion of the former, the code/scripts need to be heavily audited and revised by an experienced engineer before it’s ready to deploy for real.

frank_nitti commented on Airlines are charging solo passengers higher fares than groups   thriftytraveler.com/news/... · Posted by u/_tqr3
AlotOfReading · 3 months ago
Some countries (e.g. Japan) charge per person in non-Western hotels. Even then, you may get different prices for a solo traveler because the lower overall price means they need to increase the per person rate to make enough margin with their fixed costs.
frank_nitti · 3 months ago
They do this in Mexico as well. It always just seems like an honor-system thing unless they're checking people at the door to the hotel and/or room each time they enter, which only seems realistic in very small hotels with no restaurant or other facilities open to the public.

Otherwise, can’t one just rent the room as a solo guest and have someone come by later, as long as there isn’t an obvious group activity going on inside the room?

frank_nitti commented on Show HN: 1 min workouts for people who sit all day   shortreps.com... · Posted by u/melvinzammit
melvinzammit · 3 months ago
It is a deterministic algorithm that works based on the muscles used in each exercise, aiming to work all the main muscle groups. I used the keyword AI in some places so that the average person would understand it. I might remove it.
frank_nitti · 3 months ago
When you put it like that, it sounds much more enticing to me. Don’t remove it on account of comments like mine, especially if you have reason to believe it connects with the average person you’re hoping will use it.

Since the term AI seems to be used synonymously with transformer-based generative stuff, and seems to appear in almost all software-related content these days, that’s just where my mind goes.

u/frank_nitti

Karma: 314 · Cake day: April 11, 2017