Readit News
hackrmn commented on Libre – An anonymous social experiment without likes, followers, or ads   libreantisocial.com... · Posted by u/rododecba
hackrmn · 11 hours ago
Well, it only took about 15 hours for the site to be discovered by the Internet Troll Federation and be absolutely stuffed with slurs and hate memes.

Which was to be expected, frankly.

Not sure at which point the site started allowing HTML, but for the first couple of hours or so there was only text for thoughts; then the "hypermedia" of the kind described above appeared, rendering the site useless. Just wait until someone discovers even better ways to weaponize the HTML post feature -- there are bound to be client-side vulnerabilities galore, I expect, if only due to what Unicode makes possible...
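(For the record, the defense against the crudest of those is ancient and tiny -- escape user input so the browser treats it as text, never as markup. A minimal sketch; a real site should still reach for a vetted sanitizer library, and none of this touches the Unicode trickery I mean:)

```typescript
// Escape the five HTML-significant characters so user input renders
// as literal text instead of markup. (Minimal sketch; vetted
// sanitizer libraries handle far more corner cases than this.)
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// "<img src=x onerror=alert(1)>" comes out as inert text:
console.log(escapeHtml("<img src=x onerror=alert(1)>"));
```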

hackrmn commented on The Death of the User Interface   gist.github.com/0xs34n/a5... · Posted by u/seanseany
hackrmn · a day ago
While the title is alluring and the argument commendable, I do not necessarily think this is about computer or user interfaces. Say that I, instead of navigating Finder, or Windows Explorer, or something analogous where I have an idea in my head of where my _files_ (a tangible, if "transferred", concept) are, ask an AI agent to "list files I was working on last week". That does not imply a computing problem, unless by "computing" you mean that I am unable to organize myself and my own head does not compute -- I either know where my files are, or I don't. The AI complements my memory here; this isn't a UI problem, it's me getting lazier or more complacent, like someone with "functioning depression" who gets by at the cost of living in a "functioning mess". I am not sure I would want to hold myself to such a standard.

If this were only about efficiency -- sure, let the AI complement me. But it starts to look like excusing a sheer lack of organization and calling it "the end of tooling" (because I can't or won't use the tooling anyway, too much hassle). Meaning we are advancing into a WALL-E (or "Idiocracy") age where we just voice commands at AIs while we literally lie in bed -- is that a good thing?

Let me try to clarify and simplify my rambling: I would like AI to help me, I do, but I should know where my project files are, should I not? If our memory isn't needed, I am sure evolution will shrink it. Then we put an "AI" in our brain to remember for us, and the circle is complete?
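(And to underline how small the "magic" is: the "files I was working on last week" query is answered by file modification times alone -- no AI required. A toy sketch; the `./projects` root and the 7-day window are my own assumptions:)

```typescript
// List files modified within the last week, from mtimes alone.
// (Toy sketch: "./projects" and the 7-day cutoff are assumptions.)
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

function filesTouchedSince(dir: string, cutoff: Date): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) hits.push(...filesTouchedSince(path, cutoff));
    else if (statSync(path).mtime > cutoff) hits.push(path);
  }
  return hits;
}

const weekAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
console.log(filesTouchedSince("./projects", weekAgo));
```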

hackrmn commented on How to Think About GPUs   jax-ml.github.io/scaling-... · Posted by u/alphabetting
jacobaustin123 · 4 days ago
Shamelessly responding as the author. I (mostly) agree with you here.

> please be surgically precise with your terms

There's always a tension between precision in every explanation and the "moral" truth. I can say "a SIMD (Single Instruction Multiple Data) vector unit like the TPU VPU with 32 ALUs (SIMD lanes) which NVIDIA calls CUDA Cores", which starts to get unwieldy and even then leaves terms like vector units undefined. I try to use footnotes liberally, but you have to believe the reader will click on them. Sidenotes are great, but hard to make work in HTML.

For terms like MXU, I was intending this to be a continuation of the previous several chapters which do define the term, but I agree it's maybe not reasonable to assume people will read each chapter.

There are other imprecisions here, like the term "Warp Scheduler" is itself overloaded to mean the scheduler, dispatch unit, and SIMD ALUs, which is kind of wrong but also morally true, since NVIDIA doesn't have a name for the combined unit. :shrug:

I agree with your points and will try to improve this more. It's just a hard set of compromises.

hackrmn · 4 days ago
I appreciate your response. I made a point of not revising my comment after posting it, even after finding the following in a subsequent paragraph, quoting:

> Each SM is broken up into 4 identical quadrants, which NVIDIA calls SM subpartitions, each containing a Tensor Core, 16k 32-bit registers, and a SIMD/SIMT vector arithmetic unit called a Warp Scheduler, whose lanes (ALUs) NVIDIA calls CUDA Cores.

And right after:

> CUDA Cores: each subpartition contains a set of ALUs called CUDA Cores that do SIMD/SIMT vector arithmetic.

So, in your defense and to my shame -- you *did* do better than I was able to infer at first glance. And I can take absolutely no issue with a piece elaborating later on an originally "vague" sentence -- we need to read top to bottom, after all.

Much of the difficulty with laying out knowledge in the written word comes from inherent constraints, like choosing between deferring detail to "further down" and giving the "bird's eye view" up front. There is a reason writing is hard, and technical writing perhaps more so. You're doing much better than a lot of the other material I've had to learn from, so I can only thank you for doing as much as you already have.

To be more constructive still: I agree the border between clarity and utility isn't always clearly drawn. But think of it as a service to your readers -- go with precision, I say. If you really presuppose the reader knows SIMD, chances are they can grok a new term like "SIMD lane" if you define it _once_ and _well_. You don't need the "unwieldy" repetition -- the first time may be hard, but you only need to do it once.
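For instance, even a throwaway software model pins the term down: a SIMD unit applies _one_ instruction across N lanes in lockstep, and a "lane" is one of the N positions the instruction operates on. A purely conceptual sketch (my own toy model, nothing like the actual hardware implementation):

```typescript
// Toy model of one SIMD instruction on a 32-lane vector unit:
// the SAME operation (add) is applied to DIFFERENT data per lane,
// all in lockstep. A "lane" is one of the 32 slots; the hardware
// executing a lane is what NVIDIA brands a "CUDA Core".
const LANES = 32;

function simdAdd(a: Float32Array, b: Float32Array): Float32Array {
  const out = new Float32Array(LANES);
  for (let lane = 0; lane < LANES; lane++) {
    out[lane] = a[lane] + b[lane]; // one instruction, lane-many data
  }
  return out;
}
```

Define that once, early, and every later mention of "lane" or "CUDA Core" comes for free.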

I am rambling. I do believe there are better and worse ways to impart this kind of knowledge in writing, but I obviously don't have the answers either, so my criticism was partly unconstructive -- a sheer outcry of mild frustration from when I started conflating things from the get-go, before I decided to give it a more thorough read.

One last thing though: I always like it when a follow-up article starts with a preamble along the lines of "In the previous part of the series...", so new visitors simultaneously become aware that prior knowledge may be assumed, _and_ can navigate to the desired point in the series, perhaps all the way back to the start. That frees you from e.g. having to annotate abbreviations in every part, if you want to avoid doing that.

hackrmn commented on How to Think About GPUs   jax-ml.github.io/scaling-... · Posted by u/alphabetting
gregorygoc · 5 days ago
It's mind-boggling that this resource has not been provided by NVIDIA yet. It has reached the point where 3rd parties reverse-engineer and summarize NV hardware until it becomes an actually useful mental model.

What are the actual incentives at NVIDIA? If it’s all about marketing they’re doing great, but I have some doubts about engineering culture.

hackrmn · 4 days ago
There's plenty of circumstantial evidence that NVIDIA prefers to hand out semi-tailored documentation to signatories and other "VIPs", not least to exert control over who uses their products and how. I wouldn't put it past them to routinely neglect their _public_ documentation, for one reason or another that makes commercial sense to them but not to the public. As for incentives, go figure indeed -- you'd think that by walling off API documentation they're shooting themselves in the foot every day, but in these days of betting it all on AI -- which means selling GPUs, software, and those same NDA-gated VIP documentation articles to "partners" -- maybe they're all set anyway and care even less about the odd developer who wants to know how their flagship GPU works.
hackrmn commented on How to Think About GPUs   jax-ml.github.io/scaling-... · Posted by u/alphabetting
tormeh · 4 days ago
I find it very hard to justify investing time into learning something that's neither open source nor has multiple interchangeable vendors. Being good at using Nvidia chips sounds a lot like being an ABAP consultant or similar to me. I realize there's a lot of money to be made in the field right now, but IIUC historically this kind of thing has not been a great move.
hackrmn · 4 days ago
I grew up learning programming on a genuine IBM PC running MS-DOS, neither of which was FOSS but taught me plenty that I routinely rely on today in one form or another.
hackrmn commented on How to Think About GPUs   jax-ml.github.io/scaling-... · Posted by u/alphabetting
hackrmn · 4 days ago
I find the piece, much like a lot of other documentation, "imprecise". Like most such efforts, it likely caters to a group of people expected to benefit from being told what a GPU is, but it fumbles its terms, e.g. (the first image, with burned-in text):

> The "Warp Scheduler" is a SIMD vector unit like the TPU VPU with 32 lanes, called "CUDA Cores"

It's not clear from the above what a "CUDA core" (singular) _is_ -- this is the archetypal "let me explain things to you" error most people make, usually in good faith: if I don't know the material and am out to understand it, you have gotten me to read all of it without ever making clear the very objects of your explanation.

And so, owing to these kinds of "compounding errors", the people the piece was likely targeted at are none the wiser, really, while those who already have a good grasp of the concepts being explained, like what a CUDA core actually is, already know most of what the piece is trying to say anyway.

My advice to everyone who starts out with a back-of-the-envelope cheatsheet and then decides to publish it "for the good of mankind", e.g. on GitHub: please be surgically precise with your terms -- the terms are your trading cards, then come the verbs, etc. This is all writing 101, but evidently a rare thing. Don't mix and match terms, don't conflate them (the reader will do that for you many times over, for free, if you're sloppy), and be diligent with analogies.

Evidently, the piece may have been written to help those already familiar with TPU terminology -- it mentions "MXU" but there's no telling what that is.

I understand it's a tall order, but the piece is long, and all the effort that was put in could have been complemented with minimal extra hypertext, such as annotations for abbreviations like "MXU".

I can always ask $AI to do the equivalent for me, which is a tragedy according to some.

hackrmn commented on Perplexity offers to buy Google Chrome for $34.5B   theverge.com/news/758218/... · Posted by u/ndr
friendzis · 11 days ago
Web browsers from the 90s can render HTML perfectly well.

> if a website cannot "do" it, you as a user of the site, won't be able to experience it

Ever heard of native applications? Those could always do the thing. Not only is there no reason for web browsers to implement "web APIs", every one of them is actively harmful.

When "web developers" can finally implement a page where focus does not jump around and layouts do not shift around we can start talking about being allowed access to more than plain html.

hackrmn · 10 days ago
Native applications are a relatively fragmented market of different hardware and OS platforms, made more complicated by a relative lack of interest (which exists because the market is fragmented -- a catch-22) and by factors like needing to learn another programming language when you already know JavaScript and how it works on the Web, which is taught to more people every year for obvious reasons. Which is all why GitHub's Electron, essentially Google Chrome married to Node.js -- both _JavaScript_ platforms -- made such an impact when it was released. There's zero-install on the Web, too: just follow a link and you're surfing applications. Python+Qt applications have to be installed, even if that just means downloading them -- and there are plenty of hosts configured to deny the user the privilege of running software they downloaded, no matter how native and well-mannered it is otherwise. There are fewer pairs of hands on the job (part of the catch-22), and more standards and APIs to deal with due to the fragmentation, even with all the cross-platform offerings. All of this no doubt contributes to the native market staying behind the juggernaut the Web has become.
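(If you've never looked inside an Electron app, the whole pitch really is this small -- a Chromium window scripted from Node. A minimal sketch of the well-known quick-start shape; `index.html` stands in for whatever web page you ship:)

```typescript
// Minimal Electron "main process": open a Chromium window and load
// a local web page -- which can additionally reach Node.js APIs.
// (Sketch of the standard quick-start pattern; index.html assumed.)
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 800, height: 600 });
  win.loadFile("index.html"); // your "web app", now a desktop app
});
```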

Before you roll your eyes and label me a millennial who's seen nothing but the absolutely appalling Web applications of yesteryear, fresh off the inexperienced hands of developers who think they invented caching and whatnot: I started off with x86 assembler and C, then C++, in the early '90s, and I hold a genuine interest in everything we've learned since before Intel made the 8088. I am simply describing the reality I see, not necessarily the reality I want.

You're drawing a border on water -- there's no need to "separate" the Web from native. The Web is an application platform that developed out of a hypertext network (the "old Web", as I re-label it for comparison's sake), and the platform has tremendous value. You need tunnel vision to want to put the genie back in the bottle, but again -- I absolutely hear and understand your argument. Do you have realistic suggestions?

Drew DeVault suggested another protocol, Gemini, a while back, having become frustrated with much the same things you have. Just text markup served over an efficient text-based protocol -- essentially a regression to HTTP and HTML anno 1995 (possibly with more semantic elements). I think it's not only a fantasy but also a poor idea -- not because it's bad in itself, but because it assumes none of this is possible with today's Web, and it is -- it's just that everyone reaches for the fancy and the flashy once they start coding. That's what you were referring to with focus "jumping around" and layouts "shifting". We're sacks of flesh driven by hormones -- that's the best reason I can give you why the same platform that allows you to slap up [an HTML page that's worth reading](http://motherfuckingwebsite.com/), possibly [with a simple stylesheet that does the bare minimum to improve the user's experience](http://bettermotherfuckingwebsite.com), is _not enough_ for authors. I'd call it "author's prerogative" -- the person who pays for the domain and the hosting wants to exercise their authoring power and gets carried away with all the bells and whistles they slap on their pages. Users pull their hair out in silence (or are mostly ignored, because "do I paint the walls in your house?").

Anyway, this is getting long -- the gist of my argument is that the Web is technically capable of serving all the static HTML you like, without an ounce of the "shitty" scripting that makes everything border on "unconsumable". You're making a "dictatorship" argument along the lines of "if you can't make good readable sites, we're going to neuter the platform". But the platform _is_ what drives adoption of the Web, I say, even if from a skeptic's perspective it's now nearing cancerous growth. And yet: fix the _content_, not the _platform_. "Native" is just a word -- there is no native; everything is translated or compiled one way or another, including JavaScript (which _I_ consider a relatively bad general-purpose programming language, admittedly even under ECMA oversight, which fixed a lot of its warts). Unless you're one of those ["real programmers"](https://xkcd.com/378/).

hackrmn commented on Perplexity offers to buy Google Chrome for $34.5B   theverge.com/news/758218/... · Posted by u/ndr
Timshel · 11 days ago
> the benevolent dictator of the web

Lol it's more like a death grip since nobody can compete with their ad business model. There is almost no innovation in the browser space outside of more and more tracking ...

hackrmn · 11 days ago
I'd argue that depends on what you mean by "innovation" -- Google, meaning specifically the developers on their payroll, has been pretty busy churning out more or less useful Web API implementations, certainly at a far more frantic pace than the one browsers of yester-decade were traditionally _blamed_ for. Never mind that some of these APIs are more haphazardly designed than others; truth be told, most are okay and aptly designed, so it's not a critical issue (for Web developers or for Chrome's market share). Google co-authors most Web standards and often implements them _before_ the "standard" is published (for better and for worse; anti-trust allegations, I am looking at you). But they're not idle, that's for sure. Markedly different from how I remember Microsoft resting for months if not years on their IE laurels, like a CO2 blanket in a room from which all the air had been evacuated.

So yeah, how would you describe this lack of innovation you're referring to?

There can always be more innovation that isn't of the sort I described above, but the Web _is_ made of Web APIs -- if a website cannot "do" it, you as a user of the site won't be able to experience it, is my crude opinion. But I'd love to hear examples to the contrary, illustrating innovation that isn't Web APIs.

Removing tab-based browsing (an anti-pattern if you ask me)? Optimizations (speed, size, etc)?
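(To make my crude opinion concrete: whether a site can "do" something reduces to whether your browser ships the corresponding API; all the site itself can do is detect and degrade. A minimal sketch, with `/sw.js` as a placeholder path:)

```typescript
// If the browser never implemented an API, no amount of site-side
// cleverness conjures the feature; the site can only detect it:
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/sw.js"); // offline mode possible
} else {
  console.log("No Service Worker API -- no offline mode to experience.");
}
```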

hackrmn commented on "This question has been retired"   learn.microsoft.com/en-us... · Posted by u/1970-01-01
mingus88 · 18 days ago
The fact is that hosting is a cost and businesses are always cutting costs

They don’t get to become one of the biggest and most successful companies by providing free services to legacy customers.

Personally, I keep a folder for product manuals. Anything I buy will have a PDF that I archive myself.

Need to change the oil on that generator I bought from Costco five years ago? Not going to find the docs on the web anymore, but I have the PDF dated 2020 right here

Obviously that doesn't work for a searchable software doc site with questions and answers, but the fact that webpages can come and go at any time -- and that digital archival is far worse than the clay tablets of antiquity -- is a lesson we all have to take to heart

hackrmn · 13 days ago
I get your point, I do. Then again, a lot can be said about the "PR value" lost by saving what are, by comparison, minuscule costs of serving static hypertext. It's pennies on the dollar, and I am fairly sure Microsoft would get a lot more love from the "neckbeards" who need the old-school stuff, not the toys they insist on dangling in front of our faces (which they are famous for). Remember when Microsoft tried to move everyone away from Win32 by force-feeding everyone WinRT, writing "deprecated" over the former everywhere they could on their site(s), only to face such a shit-storm of complaints and fury from people who rightfully knew that Win32 was, for all its faults, one of the better things Microsoft has given us (in the context of Windows programming, that is)?

Speaking of Win32, but not only that: I too have started being more cautious and pedantic about user manuals and other "paraphernalia" I have to rely on -- for instance, Win32 API documentation, which is getting scarcer to find in sufficient volume and, specifically, _detail_. So I download and archive it (with an implied off-site backup, which I have for most of my "home directory" stuff) -- I agree the only way to guarantee access is to, well, obtain a copy of the document...
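(My "archiving" is nothing fancier than fetch-and-save, roughly the sketch below -- the URL is a placeholder, and a proper mirroring tool that also grabs images and stylesheets does a more thorough job:)

```typescript
// Bare-bones page archiver: fetch one doc page, keep a local copy.
// (Sketch only: placeholder URL, no assets fetched, no retries.)
import { writeFileSync } from "node:fs";

async function archivePage(url: string, outFile: string): Promise<void> {
  const res = await fetch(url); // global fetch, Node 18+
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  writeFileSync(outFile, await res.text());
}

archivePage("https://example.com/win32-api-doc.html", "win32-doc.html")
  .catch(console.error);
```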

hackrmn commented on Food, housing, & health care costs are a source of major stress for many people   apnorc.org/projects/food-... · Posted by u/speckx
N_Lens · 16 days ago
The current financial system and various corporations see the majority of consumers like livestock - a kind of resource to be exploited. It's better to be free range organic though, in my humble opinion, but the majority seem to be trending towards battery/cage chickens.
hackrmn · 16 days ago
With "free range organic" are you describing people or food? As in, "it's better to be a free range organic human [vs. battery/caged livestock human-like resource for exploitation]", or "it's better to _eat_ free range organic [food]"? These are two different things, and in either case I'd argue not an option for the people who struggle affording food that is health(ier) by modern standards.
