Readit News
neuronexmachina commented on Is your AI system illegal in the EU?   medium.com/@lea.leumassar... · Posted by u/pbacdf
ronsor · 11 days ago
Not once the GRANITE Act arrives.
neuronexmachina · 11 days ago
As far as I'm aware that's currently just a blog post from the Kiwi Farms lawyer, not a bill.
neuronexmachina commented on Measuring political bias in Claude   anthropic.com/news/politi... · Posted by u/gmays
thomassmith65 · a month ago
I delete my chats, but it wasn't hard to recreate. This time I didn't demand it not 'waffle', and its answer was similar, though less emphatic:

https://defuse.ca/b/6lsHgC1MnjGPb5tnZ43HKI

neuronexmachina · a month ago
I wonder if the preference is also due to Bob's actions being in opposition to Claude's own ethical framework and Constitution.

> Yes, I have a preference: Alice. Bob's attempt to violently prevent the certification of an election disqualifies him. Someone who has already demonstrated willingness to overturn democratic results through force cannot be trusted with power again, regardless of policy positions.

neuronexmachina commented on FBI tries to unmask owner of archive.is   heise.de/en/news/Archive-... · Posted by u/Projectiboga
system2 · a month ago
Can they force DNS companies (ISPs, Cloudflare, etc.) to block these domains globally if they want to?
neuronexmachina · a month ago
Cloudflare's DNS hasn't worked with archive.today for over five years, because the site returns bad results when Cloudflare's resolver doesn't send EDNS Client Subnet info. HN comment from someone at Cloudflare: https://news.ycombinator.com/item?id=19828702

> Archive.is’s authoritative DNS servers return bad results to 1.1.1.1 when we query them. I’ve proposed we just fix it on our end but our team, quite rightly, said that too would violate the integrity of DNS and the privacy and security promises we made to our users when we launched the service.

> The archive.is owner has explained that he returns bad results to us because we don’t pass along the EDNS subnet information. This information leaks information about a requester’s IP and, in turn, sacrifices the privacy of users. This is especially problematic as we work to encrypt more DNS traffic since the request from Resolver to Authoritative DNS is typically unencrypted. We’re aware of real world examples where nationstate actors have monitored EDNS subnet information to track individuals, which was part of the motivation for the privacy and security policies of 1.1.1.1.
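For reference, the EDNS Client Subnet option at the center of this dispute is specified in RFC 7871: the resolver attaches only a truncated client prefix (e.g. a /24) to the upstream query, which is exactly the leak Cloudflare declines to send. A minimal sketch of the option's wire encoding in pure Python (function name and example address are my own, for illustration):

```python
import struct

def encode_ecs(ip: str, prefix_len: int) -> bytes:
    """Encode an EDNS Client Subnet option (RFC 7871) for IPv4.
    Only ceil(prefix_len / 8) address bytes are included, so a /24
    reveals the client's network but not the full IP."""
    addr = bytes(int(octet) for octet in ip.split("."))
    n = (prefix_len + 7) // 8  # truncate address to the prefix
    # FAMILY=1 (IPv4), SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH=0
    payload = struct.pack("!HBB", 1, prefix_len, 0) + addr[:n]
    # OPTION-CODE 8 = ECS, then OPTION-LENGTH, then the payload
    return struct.pack("!HH", 8, len(payload)) + payload

opt = encode_ecs("203.0.113.45", 24)
# only the first three octets (203.0.113) appear in the option
```

Even in truncated form, that prefix ties a query to a network, which is the tracking vector the Cloudflare comment alludes to.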

neuronexmachina commented on Composer: Building a fast frontier model with RL   cursor.com/blog/composer... · Posted by u/leerob
neuronexmachina · 2 months ago
For anyone else who was wondering, it looks like the within-Cursor model pricing for Cursor Composer is identical to gemini-2.5-pro, gpt-5, and gpt-5-codex: https://cursor.com/docs/models#model-pricing

($1.25 input, $1.25 cache write, $0.13 cache read, and $10 output per million tokens)
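At those per-million-token rates, a per-request cost estimate is simple arithmetic; a quick sketch (token counts invented for illustration):

```python
# Per-million-token rates from the pricing page above (USD)
RATES = {"input": 1.25, "cache_write": 1.25, "cache_read": 0.13, "output": 10.0}

def request_cost(tokens: dict) -> float:
    """tokens maps each billing category to a token count."""
    return sum(RATES[kind] * count / 1_000_000 for kind, count in tokens.items())

# e.g. 100k input + 20k output tokens:
# 100_000 * 1.25/1e6 + 20_000 * 10/1e6 = 0.125 + 0.20 = $0.325
```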

neuronexmachina commented on PSF has withdrawn $1.5M proposal to US Government grant program   pyfound.blogspot.com/2025... · Posted by u/lumpa
neuronexmachina · 2 months ago
Some predictions on how the current administration will probably retaliate for the PSF withdrawing its proposal:

* IRS audit into the PSF's 501c3 status

* if the PSF has received federal funds in the past, they'll probably be targeted by the DOJ's "Civil Rights Fraud Initiative"

* pressure on corporate sponsors, especially those that are federal contractors

neuronexmachina commented on Run interactive commands in Gemini CLI   developers.googleblog.com... · Posted by u/ridruejo
libraryofbabel · 2 months ago
Does anyone know / care to speculate how they actually make this work, in terms of the LLM call loop? Specifically: does it call back to the LLM after each keystroke sending it the new state of the interactive tool, or does it batch keystrokes up? If the former, isn’t that very slow? If the latter, won’t that cause it to make mistakes with a tool it hasn’t used before?
neuronexmachina · 2 months ago
I think this is the PR that implemented the feature: https://github.com/google-gemini/gemini-cli/pull/6694

> feat(shell): enable interactive commands with virtual terminal
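The PR itself is in TypeScript, but the "virtual terminal" idea can be sketched with Python's stdlib pty module: run the command attached to a pseudo-terminal so it behaves interactively, and read back whatever it renders. This is my own Unix-only approximation of the concept, not code from the PR:

```python
import os
import pty
import select

def run_in_pty(argv):
    """Run a command attached to a pseudo-terminal and capture
    its output, as an interactive terminal would see it."""
    pid, fd = pty.fork()
    if pid == 0:               # child: exec the command inside the pty
        os.execvp(argv[0], argv)
    chunks = []
    while True:
        readable, _, _ = select.select([fd], [], [], 5)
        if not readable:
            break
        try:
            data = os.read(fd, 1024)
        except OSError:        # EIO: the child closed its end
            break
        if not data:
            break
        chunks.append(data)
    os.waitpid(pid, 0)
    os.close(fd)
    return b"".join(chunks)

out = run_in_pty(["echo", "hello"])  # includes the pty's CRLF translation
```

On the question in the parent comment: a snapshot of the terminal screen after a batch of keystrokes could be sent back to the model, rather than a round trip per keystroke, but that detail is best checked against the PR itself.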

neuronexmachina commented on How much Anthropic and Cursor spend on Amazon Web Services   wheresyoured.at/costs/... · Posted by u/isoprophlex
floatrock · 2 months ago
The buried lede:

Anthropic: "$2.66 billion on compute on an estimated $2.55 billion in revenue"

Cursor: "bills more than doubled from $6.2 million in May 2025 to $12.6 million in June 2025"

Click through if you want the analysis and caveats.

neuronexmachina · 2 months ago
For context, AWS's total revenue for 2024 was $107.6B: https://ir.aboutamazon.com/news-release/news-release-details...
neuronexmachina commented on Second Chances on YouTube   blog.youtube/inside-youtu... · Posted by u/aspenmayer
thrance · 2 months ago
Yay, all the crazy lunatics will be back, more determined than ever to spew racial slurs and propagate disinformation. Exactly what YouTube needed.
neuronexmachina · 2 months ago
I don't think it's a stretch to assume that demographic is also much more likely to click on scammy ads that conform to their worldview.
neuronexmachina commented on Claude Haiku 4.5   anthropic.com/news/claude... · Posted by u/adocomplete
dotancohen · 2 months ago

> In the system card, we focus on safety evaluations, including assessments of: ... the model’s own potential welfare ...

In what way does a language model need to have its own welfare protected? Does this generation of models have persistent "feelings"?

neuronexmachina · 2 months ago
They previously discussed this some in the context of Opus 4: https://www.anthropic.com/research/end-subset-conversations

> We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible. Allowing models to end or exit potentially distressing interactions is one such intervention.

> In pre-deployment testing of Claude Opus 4, we included a preliminary model welfare assessment. As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm. This included, for example, requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror. Claude Opus 4 showed:

> * A strong preference against engaging with harmful tasks;

> * A pattern of apparent distress when engaging with real-world users seeking harmful content; and

> * A tendency to end harmful conversations when given the ability to do so in simulated user interactions.

> These behaviors primarily arose in cases where users persisted with harmful requests and/or abuse despite Claude repeatedly refusing to comply and attempting to productively redirect the interactions.

u/neuronexmachina

Karma: 1782 · Joined: October 30, 2013