ycombiredd commented on AT&T, Verizon blocking release of Salt Typhoon security assessment reports   reuters.com/business/medi... · Posted by u/redman25
jtbayly · 14 hours ago
What is an LI console? Where is it installed that it has access to accomplish this?
ycombiredd · 12 hours ago
"Lawful Intercept": the mediation interface that US carriers are required to provide for court-ordered surveillance under CALEA.

Some may find this interesting https://www.fcc.gov/calea

ycombiredd commented on DNS Explained – How Domain Names Get Resolved   bhusalmanish.com.np/blog/... · Posted by u/okchildhood
ycombiredd · 3 days ago
It might be worth mentioning the concept of a "stub resolver" and clarifying that a nameserver is itself a resolver. That may be pedantic, but the conceptual difference often comes down to what, if anything, the particular DNS server answering the query is authoritative for.

One other thing worth a mention is the OS resolver and its "suffix search order", with an example of connecting (HTTPS, ping, SSH, whatever protocol) to a host using just the hostname, and the aforementioned mechanism that (probably) lets that connect to the FQDN you want. (Also, now that I type that, do you mention "FQDN" at all? If not, maybe you should.)

On that note, one final thought: there's an error/confound that can occur when a hostname is entered and doesn't resolve, but does resolve on a retry with one of the domain suffixes appended (this can be particularly confusing when a typo meets a wildcard A record in a domain, for example). I recognize that the lines that look like DNS records aren't explicitly stated to be in the format of any particular DNS server software, and even if they were, they're snippets without larger context, so we don't know what the $ORIGIN for the zone might be. An adjacent concept you might want to explore, even if just for your own edification, is the effect of a terminating "." at the end of a hostname, at either resolution or configuration time.
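To make the suffix-search-order and trailing-dot ideas concrete, here's a toy sketch (not any real resolver implementation; the zone data and search list are invented for illustration) of how a stub resolver typically walks the configured search list, and how a trailing dot short-circuits it:

```python
# Toy sketch of OS-level suffix search order. The "zone" dict and the
# search list below are invented; a real stub resolver reads them from
# /etc/resolv.conf (or the platform equivalent) and queries nameservers.

ZONE = {
    "intranet.corp.example.com.": "10.0.0.5",
    "www.example.com.": "93.184.216.34",
}
SEARCH_SUFFIXES = ["corp.example.com", "example.com"]

def resolve(name):
    """Return (fqdn, address) or None, mimicking suffix search order."""
    if name.endswith("."):
        # Trailing dot: treat the name as an FQDN and skip the search list.
        return (name, ZONE[name]) if name in ZONE else None
    # Try the bare name first, then each configured suffix in order.
    candidates = [name + "."] + [f"{name}.{s}." for s in SEARCH_SUFFIXES]
    for fqdn in candidates:
        if fqdn in ZONE:
            return (fqdn, ZONE[fqdn])
    return None

print(resolve("intranet"))   # matches via the first search suffix
print(resolve("www"))        # matches via the second search suffix
print(resolve("www."))       # trailing dot: no suffixing, so no match here
```

This is also where the typo-plus-wildcard confound sneaks in: a mistyped bare name can fail every candidate except a suffix that happens to have a wildcard record, so the lookup "succeeds" at an FQDN the user never intended.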

Just offering feedback that might help you add to the article.

ycombiredd commented on GitHub Actions is slowly killing engineering teams   iankduncan.com/engineerin... · Posted by u/codesuki
ycombiredd · 4 days ago
I don't care if this is an advertisement for buildkite masquerading as a blog post or if this is just an honest rant. Either way, I gotta say it speaks a lot of truth.
ycombiredd commented on Mermaid ASCII: Render Mermaid diagrams in your terminal   github.com/lukilabs/beaut... · Posted by u/mellosouls
thangalin · 12 days ago
While Mermaid gets the limelight, Kroki[1] offers: BlockDiag, BPMN, Bytefield, SeqDiag, ActDiag, NwDiag, PacketDiag, RackDiag, C4 with PlantUML, D2, DBML, Ditaa, Erd, Excalidraw, GraphViz, Nomnoml, Pikchr, PlantUML, Structurizr, Svgbob, Symbolator, TikZ, Vega, Vega-Lite, WaveDrom, WireViz, and Mermaid.

My Markdown editor, KeenWrite[2], integrates Kroki as a service. This means whenever a new text-based diagram format is offered by Kroki, it is available to KeenWrite, dynamically. The tutorial[3] shows how it works. (Aside, variables within diagrams are also possible, shown at the end.)

Note that Mermaid diagrams cannot be rendered by most libraries[4] due to its inclusion of <foreignObject>, which is browser-dependent.

[1]: https://kroki.io/

[2]: https://keenwrite.com/

[3]: https://www.youtube.com/watch?v=vIp8spwykZY

[4]: https://github.com/orgs/mermaid-js/discussions/7085

ycombiredd · 12 days ago
Tangentially related, I once wanted to render a NetworkX DAG in ASCII, and created phart to do so.

There's an example of a fairly complicated graph of chess grandmaster PGN data, taken from a matplotlib example on the NetworkX documentation website, among some more trivial output examples in the README at https://github.com/scottvr/phart/blob/main/README.md#example...

(You will need to expand the examples by tapping/clicking the rightward-facing triangle under "Examples" so that it rotates to face downward and the hidden content is displayed.)
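For anyone curious what the general idea looks like without installing anything, here's a throwaway stdlib-only sketch (this is not phart's actual algorithm or API; the tiny DAG is made up) that prints a DAG layer by layer using a simple longest-path leveling:

```python
from collections import defaultdict

# Tiny hand-built DAG; edges point parent -> child.
EDGES = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]

def levels(edges):
    """Group nodes by longest-path depth from the roots of a DAG."""
    children = defaultdict(list)
    nodes = set()
    for u, v in edges:
        children[u].append(v)
        nodes.update((u, v))
    depth = {}
    def walk(n, d):
        # Keep the deepest level at which each node is reachable.
        if depth.get(n, -1) < d:
            depth[n] = d
            for c in children[n]:
                walk(c, d + 1)
    roots = nodes - {v for _, v in edges}
    for r in sorted(roots):
        walk(r, 0)
    grouped = defaultdict(list)
    for n, d in depth.items():
        grouped[d].append(n)
    return [sorted(grouped[d]) for d in sorted(grouped)]

def render(edges):
    """One row of node labels per level; no edge drawing in this toy."""
    return "\n".join("  ".join(row) for row in levels(edges))

print(render(EDGES))
# a
# b  c
# d
```

A real renderer like phart additionally has to route the edge lines between rows, which is where most of the actual work is.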

ycombiredd commented on What came first: the CNAME or the A record?   blog.cloudflare.com/cname... · Posted by u/linolevan
bwblabs · 22 days ago
I'm not sure, but we're seeing this specifically with _dmarc CNAMEing to '.hosted.dmarc-report.com' together with a TXT record type, also see this discussion users asking for this at deSEC: https://talk.desec.io/t/cannot-create-cname-and-txt-record-f...

My main point was however that it's really not okay that CloudFlare allows setting up other record types (e.g. TXT, but basically any) next to a CNAME.

ycombiredd · 21 days ago
Yes. This type of behavior is what I was referring to in an earlier comment about flashbacks to named logs filled with "cannot have cname and other data" and slapping my forehead asking "who keeps doing this?", back when editing zone files by hand was the norm. And then, of course, having repeats of that feeling as tools were built, automation became increasingly common, and large service providers "standardized" interfaces (ostensibly to ensure correctness) that allow, or even encourage, creation of bad zone configurations.

The more things change, the more things stay the same. :-)

ycombiredd commented on What came first: the CNAME or the A record?   blog.cloudflare.com/cname... · Posted by u/linolevan
colmmacc · 21 days ago
I am very petty about this one bug and have a very old axe to grind that this reminded me of! Way back in 2011 CloudFlare launched an incredibly poorly researched feature to just return CNAME records at a domain apex ... RFCs be damned.

https://blog.cloudflare.com/zone-apex-naked-domain-root-doma... , and I quote directly ... "Never one to let a RFC stand in the way of a solution to a real problem, we're happy to announce that CloudFlare allows you to set your zone apex to a CNAME."

The problem? CNAMEs are name level aliases, not record level, so this "feature" would break the caching of NS, MX, and SOA records that exist at domain apexes. Many of us warned them at the time that this would result in a non-deterministic issue. At EC2 and Route 53 we weren't supporting this just to be mean! If a user's DNS resolver got an MX query before an A query, things might work ... but the other way around, they might not. An absolute nightmare to deal with. But move fast and break things, so hey :)

In earnest though ... it's great to see how CloudFlare are now handling CNAME chains and A-record ordering issues in this kind of detail. I never would have thought of this implicit contract they've discovered, and it makes sense!

ycombiredd · 21 days ago
You just caused flashbacks of BIND error messages of the sort "cannot have CNAME and other data", from this exact proximate cause, and of having to explain the problem many, many times. Confusion and ambiguity around this have existed forever among people creating domain RRs, whether by hand-editing zone files or via the automated, more machined equivalents.
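For anyone who hasn't run into it firsthand, the error comes from zone configurations like this hypothetical fragment (names invented), which RFC 1034 forbids because a node owning a CNAME may own no other data:

```dns
; Illegal: "www" owns both a CNAME and another record type.
; BIND's named will refuse to serve this, logging an error of the
; sort "cannot have CNAME and other data".
www     IN  CNAME   host.example.com.
www     IN  TXT     "some-verification-token"
```

The usual fixes are to replace the CNAME with the target's records directly, or to hang the extra record off a different name.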

Relatedly, the phrase "CNAME chains" stirs vague memories of confusion surrounding the concept of "CNAME" and casual usage of the term "alias". Without re-reading RFC 1034 today, I recall that my understanding back in the day was that the "C" was for "canonical", and that the name a CNAME resolved to must itself have an A record and not be another CNAME. I acknowledge the already-discussed point that my "must" is doing a lot of lifting there, since the RFC in question predates the normative-language standard RFC itself.

So, I don't remember exactly the initial point I was trying to get at with my second paragraph; maybe there have always been various failure modes due to differing interpretations, which have only compounded with age, new blood, non-standard language in providers' self-serve DNS interfaces, etc., which I suppose only strengthens the "ambiguity" claim. That doesn't excuse such a large critical service provider though, at all.

ycombiredd commented on Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity   arxiv.org/abs/2510.01171... · Posted by u/ycombiredd
ycombiredd · 23 days ago
So, I posted this link. I actually did so assuming it had likely already been submitted, and I wanted to discuss this with people more qualified and educated in the subject than I am. The authors of this paper are definitely more qualified to publish such a paper than I am; I'm not an ML scientist and I am not trying to pose as one. The paper made me feel a sort of way, and caused a bunch of questions to come to mind that I didn't find answers to in the paper, but, as I'm willing to suppose, maybe I'm not even qualified to read such a paper. I considered messaging the authors someplace like Twitter, or leaving review/feedback on the arXiv submission (which I probably don't have access to do with my user anyway, but I digress). I decided that might make me seem like a hostile critic, or, maybe more likely, I'd just come off as an unqualified idiot.

So... HN came quickly to mind as a place where I can share a thought, considered opinion, ask questions, with potential to have them be answered by very smart and knowledgeable folks on a neutral ground. If you've made it this far into my comment, I already appreciate you. :)

Ok so... I've already disclaimed any authority, so I will get to my point and see what you guys can tell me. I read the paper (it is 80+ pages, so admittedly I skimmed some math, but I also re-read some passages to feel more certain that I understood what they are saying).

I understand the phenomenon, and have no reason to doubt anything they put in the paper. But, as I mentioned, while reading it I had some intangible gut "feelings" that seeing they have math to back what they're saying could not resolve for me. Maybe this is just because I don't understand the proofs. Still, I realized when I put it down that it actually wasn't anything they said; it was what, to my naive brain, seemed not said, and I felt like it should have been.

I'll try to get to the point. I completely buy that reframing prompts can reduce mode collapse. But, as I understand it, the chat interface in front of the backend API of any LLM tested has no insight into logits, probabilities, etc. The parameters passed in the prompt request, and the probabilities returned with the generations (if requested in the API call), do not leak and are not provided in the chat conversation context in any way. So when you prompt an LLM to return a probability, it responds with, essentially, the language about probabilities it learned during training, and it seems rather unlikely that many training datasets contain factual information about their own contents' distributions from which the model could "learn", during training or RLHF, anything useful about the probabilities of its own training data.

So, a part of the paper I re-read more than once says (in 4.2): "Our method is training-free, model-agnostic, and requires no logit access." This statement is unequivocally true and honest, but (and I'm not trying to be rude or mean; I just feel like there is something subtle I'm missing or misunderstanding) said another way, it could also be true and honest if it said "Our method has no logit access, because the chat interface isn't designed that way". Here's what immediately follows in my mind: "the model learned how humans write about probabilities and will output a number that may be near to (or far away from) the actual probability of the token/word/sentence/whathaveyou, and we observed that if you prompt the model in a way that causes it to output a number that looks like a probability (some digits, a decimal somewhere), along with the requested five jokes, it has an effect on the 'creativity' of the list of five jokes it gives you."

So, naturally, one wonders what actual correlation there is, if any, between the numbers the LLM generates as "hallucinated" probabilities (I'm not trying to use the word in a loaded way; it's just the term everyone understands for this meaning, with no sentiment behind my usage here) for the jokes it generated, and the actual probabilities thereof. I did see that they measured empirical frequencies of generated answers across runs and compared that empirical histogram to a proxy pretraining distribution, and they acknowledge, and clearly state, that they did no comparison or correlation of the "probabilities" output by the model. So, without continuing to belabor that point, this is probably core to my confusion about what the paper frames the phenomenon as indicating.

It is hard for me to stop asking all the slight variations on these questions that led me to write this, but I will stop, and try to get to a TL;DR that I think dear HN readers may appreciate more than my exposition of befuddlement bordering on dubiousness:

I guess the TL;DR of my comment is that I am curious whether the authors examined any relationship between the LLM's verbalized "probabilities" and actual model sampling likelihoods (logprobs or selection frequency). I am not convinced that the verbalized "probabilities" themselves are doing any work beyond functioning as token noise or prompt reframing.

I didn't see a control for, or even a comparison against, multi-slot prompts with arbitrary labels or non-semantic "decorative" annotation. In my experience poking and prodding LLMs as a user, hoping to influence generations in specific and sometimes unknown ways, even lightweight slotting without probability language substantially reduces repetition, which makes me wonder how much of the gain from VS is attributable to task reframing, as opposed to the probability verbalization itself.
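The comparison I keep wishing for could be as simple as collecting, for each generated item, the model's verbalized probability and the empirical frequency with which that item actually appears across repeated sampling runs, then checking rank correlation. A stdlib-only sketch of that measurement, with invented numbers (the data below is illustrative only, not from the paper):

```python
# Sketch of the missing comparison: rank-correlate the model's
# *verbalized* probabilities against empirically observed selection
# frequencies. All numbers below are invented for illustration.

def rank(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rho: Pearson correlation of the ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

verbalized = [0.40, 0.25, 0.15, 0.12, 0.08]   # what the model *says*
observed   = [0.35, 0.10, 0.20, 0.25, 0.10]   # frequency across repeated runs

print(f"Spearman rho: {spearman(verbalized, observed):.2f}")  # prints 0.46
```

A rho near zero would say the verbalized numbers are essentially decorative; a strong rho would say the model's probability talk actually tracks its sampling behavior. Either result would sharpen the paper's framing.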

This may not even be a topic of interest for anyone, and maybe nobody will even see my comment/questions, so I'll stop for now... but if anyone has insights, clarifications, or can point out where I'm being dense, I actually have quite a bit more to say and ask about this paper.

I can't really explain why I just had to see if I could get another insightful opinion on this paper. I usually don't have such a strong reaction when reading academic papers I may not fully understand, but there's some gap in my knowledge (or, less likely, something off about the framing of the phenomenon described), and it's causing me to really hope for discussion, so I can ask my perhaps even less-qualified questions pertaining to what boils down to mostly just my intuition (or maybe incomprehension. Heh.)

Thanks so much if you've read this and even more if you can talk to me about what I've used too many words to try to convey here.

ycombiredd commented on Former NYC Mayor Eric Adams rugs his own memecoin just 30 minutes after launch   old.reddit.com/r/CryptoCu... · Posted by u/pulisse
panja · a month ago
Yeah but I doubt it. These people have PR teams and could have easily released a statement if this was fake.
ycombiredd · a month ago
Yeah, just following up to my grandparent comment to say "wow. Holy shit. It is how it looks." I'm not sure why I was surprised; maybe I'm an optimist, or as I suggested in my first comment, a bit naive.

In my defense, I don't think I'm stupid; I just don't want to believe so many people in power are cartoonishly evil, so I tend to look for explanations that don't require it. I think my internal sense of the world wants there to be a distinction between, say, average crypto-scammer evil buffoonery and people in positions where they at least ostensibly try to present as the good guy while keeping their evildoing secret. This story gives me some sort of cognitive dissonance, and while reflecting on that fact, I get a bit sad. This world is bonkers.

ycombiredd commented on Former NYC Mayor Eric Adams rugs his own memecoin just 30 minutes after launch   old.reddit.com/r/CryptoCu... · Posted by u/pulisse
ycombiredd · a month ago
Maybe I am missing something or am just naive, but isn't it fairly common for social media accounts of well-known figures to be taken over (hacked/phished/whatever) for the purpose of shilling some crypto scam? Launching a memecoin and then very quickly (30 min later, apparently) rugpulling seems like it would at least as likely fit that type of scam as it would being one where the public figure themselves is actually behind the scam.

Not making a claim as to what is actually true, just positing explanations. Heck, maybe plot twist: it is actually Eric Adams behind it, but the "account takeover" possibility was planned to serve as plausible deniability.

You know... like "an actor that's playing a dude, disguised as another dude" type thing.
