Readit News
derangedHorse commented on How elites could shape mass preferences as AI reduces persuasion costs   arxiv.org/abs/2512.04047... · Posted by u/50kIters
deepsquirrelnet · 12 days ago
> Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30.186) that presented valid login credentials for a DOGE employee account

> “Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating,” Berulis wrote. “There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers.”

https://krebsonsecurity.com/2025/04/whistleblower-doge-sipho...

I’m surprised this didn’t make bigger news.

derangedHorse · 12 days ago
I'm genuinely confused about this story and the affiliated parties. I've actively tried to search for "Daniel Berulis" and couldn't find any results pointing to anything outside the confines of this story. I'm also suspicious of the lack of updates despite the fact that his lawyer, Andrew Bakaj, is a very public figure who just recently commented on a related matter without bringing up Berulis [1].

Meanwhile, the NLRB's acting press secretary denies this ever occurred [2]:

> Tim Bearese, the NLRB's acting press secretary, denied that the agency granted DOGE access to its systems and said DOGE had not requested access to the agency's systems. Bearese said the agency conducted an investigation after Berulis raised his concerns but "determined that no breach of agency systems occurred."

One can make the case that he's lying to protect the NLRB's reputation, but that claim has no more validity than the claim that Daniel Berulis himself is lying to further his own political interests. Bearese has also held his position since before the Trump administration started, having had the job since at least 2015. It's very hard for me to take Berulis's account seriously, especially considering the political climate.

[1] https://www.spokesman.com/stories/2025/nov/18/us-federal-wor...

[2] https://news.wgcu.org/2025-04-15/5-takeaways-about-nprs-repo...

derangedHorse commented on How elites could shape mass preferences as AI reduces persuasion costs   arxiv.org/abs/2512.04047... · Posted by u/50kIters
nhod · 12 days ago
Sorry, no. Hanlon's razor is smart and correct in the majority of cases, including this one.

In this case, it is a huge stretch to ascribe DOGE's actions to incompetence or stupidity. Thus, we CAN ascribe them to malice.

Elon Musk and Donald Trump are many things, but they are NOT stupid and NOT incompetent. Elon is the richest man in the world, running some of the most innovative and important companies in the world. Donald Trump has managed to get elected twice despite the fact (because of the fact?) that he is a serial liar and a convicted criminal.

They and other actors involved have demonstrated extraordinary malice, time and time again.

It is safe to ascribe this one to malice. And Hanlon's Razor holds.

derangedHorse · 12 days ago
Setting aside the concept of "stupidity" for a second, because it's too hard to define generally for the sake of argument: one can absolutely be successful at some things and incompetent at others. Your expectations of their overall competence, as with most assumptions of malice, are what fuel your bias.
derangedHorse commented on How elites could shape mass preferences as AI reduces persuasion costs   arxiv.org/abs/2512.04047... · Posted by u/50kIters
themafia · 12 days ago
It's not about persuading you via "Russian bot farms," which I think is a ridiculous and unnecessarily reductive viewpoint.

It's about hijacking all of your federal and commercial data that these companies can get their hands on and building a highly specific and detailed profile of you. DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir. Then they use AI to either imitate you or possibly predict your reactions to certain stimuli.

Then presumably the game is finding the best way to turn you into a human slave of the state. I assure you, they're not going to use Twitter to manipulate your vote for president; they have much deeper designs on your wealth and ultimately your own personhood.

It's too easy to punch down. I recommend anyone presume the best of actual people and the worst of our corporations and governments. The data seems clear.

derangedHorse · 12 days ago
> DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir

Do you have any actual evidence of this?

> I recommend anyone presume the best of actual people and the worst of our corporations and governments

Corporations and governments are made of actual people.

> Then presumably the game is finding the best way to turn you into a human slave of the state.

"the state" doesn't have one grand agenda for enslavement. I've met people who work for the state at various levels and the policies they support that might lead towards that end result are usually not intentionally doing so.

"Don't attribute to malice what can be explained by incompetence"

derangedHorse commented on But why is AI bad?   daymare.net/blogs/but-why... · Posted by u/victorbuilds
derangedHorse · 14 days ago
> Selling AI generated slop at full price

I think that’s for the market to correct. If people don’t spend money on AI-generated products because they’re bad, that’ll send a signal to the company to pivot from its current strategy. If people are spending money on those things regardless, then maybe that’s an indicator that these processes create better output than what came before. At the end of the day, in the absence of state intervention, consumers will pick what’s best for them and may react in ways that surprise the online virtue signalers.

derangedHorse commented on But why is AI bad?   daymare.net/blogs/but-why... · Posted by u/victorbuilds
api · 14 days ago
This is a bubble.

It reminds me more of the dot.com bubble than any other. I was in college but also working in the field then and saw that one come and go.

It’s like the dot.com bubble in that yes, there is a lot of “there” there, but there is also a ton of premature hype and speculation. What reminds me most of dot.com is how people are shoehorning AI into everything to ride the hype wave. During dot.com there were cases of boring companies adding .com to their name or opening an online division and seeing their stocks increase 10X in 24 hours.

derangedHorse · 14 days ago
Everything seems like a bubble to the people who got burned in the dot com era. Hype automatically gets attributed to money flowing disproportionately into vaporware. That isn’t what I’ve observed. I think a lot of the current AI companies will fail, but I don’t think it’ll be from a failure to deliver a product or generate revenue. I also don’t think valuations or investments have been any more extreme than they were over the last decade.
derangedHorse commented on It’s been a very hard year   bell.bz/its-been-a-very-h... · Posted by u/surprisetalk
derangedHorse · 15 days ago
> we won’t work on product marketing for AI stuff, from a moral standpoint

I fundamentally disagree with this stance. Labeling a whole category of technologies as immoral because of some perceived immorality in the training process, regardless of how that training is done, seems irrational.

derangedHorse commented on It’s been a very hard year   bell.bz/its-been-a-very-h... · Posted by u/surprisetalk
order-matters · 15 days ago
Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals. I am sure you could find a lot of moral values you would simply refuse to compromise on for the sake of business. The line between a moral value and a heavy preference, however, is blurry - and that is probably where most people have AI placed on the moral spectrum right now. Being out of business shouldn't be a death sentence, and if it is, then maybe we are overlooking something more significant.

I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.

What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations. That's not to say two things can't both be bad, but it looks to me like people are using the moral argument as a way to avoid learning something new while signaling how ethical they are about it, yet at the same time they refuse to give up, for ethical reasons, the things they are already accustomed to when they learn more about them. It just all seems rather convenient.

The main issue I see discussed is unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isn't unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable and all benefits taken back, so as to kill any perceived incentive to perform those actions, but once the damage is done, why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?

Maybe we should target these AI companies if they are unethical: stop them from training any new models with the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public. But models that are already trained can actually be used for good, and I personally see it as unethical not to use them.

Sorry for the ramble, but it is a very interesting topic that should probably have as much discussion around it as we can get.

derangedHorse · 15 days ago
> but once the damage is done why let it happen in vain?

Because there are no great ways to leverage the damage without perpetuating it. Who do you think pays for the hosting of these models? And what do you mean by distributing the IP and profits to the public? If this process is to be facilitated by the government, I don’t have faith it’ll be able to allocate capital well enough to keep the current operation sustainable.

derangedHorse commented on Constant-time support coming to LLVM: Protecting cryptographic code   blog.trailofbits.com/2025... · Posted by u/ahlCVA
charcircuit · 21 days ago
These are meaningless without guarantees that the processor will run the instructions in constant time rather than as fast as possible. Claims like "cmov on x86 is always constant time" are dangerous because a microcode update could make that no longer the case. Programmers want an actual guarantee that the code will take the same amount of time.

We should be asking our CPU vendors to support enabling a constant time mode of some sort for sensitive operations.

derangedHorse · 20 days ago
I agree. For use cases where side-channel attacks are likely to be attempted, the security of the system ultimately depends on both the software and the hardware it runs on.
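
To make the software half of that concrete, here is a minimal Python sketch (the function names are illustrative; `hmac.compare_digest` is the standard library's constant-time comparison). Even a careful source-level routine like this still leans on the interpreter, compiler, and CPU beneath it behaving as assumed, which is exactly the guarantee the parent comment wants from vendors.

```python
import hmac

# Naive comparison: returns as soon as a byte differs, so the elapsed
# time leaks how many leading bytes of the guess were correct.
def leaky_equal(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

# Constant-time comparison from the standard library: it examines every
# byte regardless of where the first mismatch occurs.
def careful_equal(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)

if __name__ == "__main__":
    token = b"s3cret-token-value"
    print(leaky_equal(token, b"s3cret-token-valuX"))   # False; timing depends on the mismatch position
    print(careful_equal(token, b"s3cret-token-valuX")) # False; timing is roughly uniform
```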
derangedHorse commented on Await Is Not a Context Switch: Understanding Python's Coroutines vs. Tasks   mergify.com/blog/await-is... · Posted by u/remyduthu
derangedHorse · 20 days ago
Ironically, by trying to explain awaitables in Python through comparison with other languages, the author shows how much he doesn’t understand the asynchronous models of other languages lol
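
For readers who haven't seen the distinction the article's title refers to, here is a rough asyncio sketch (function names are mine): awaiting a coroutine directly just runs it inline within the current task, while `asyncio.create_task` hands it to the event loop so it can run concurrently with other work.

```python
import asyncio

async def fetch(label: str, delay: float) -> str:
    # Stand-in for some I/O-bound work.
    await asyncio.sleep(delay)
    return label

async def sequential() -> None:
    # Awaiting coroutines directly runs them one after another
    # inside the current task: total time is about 0.2 s.
    a = await fetch("a", 0.1)
    b = await fetch("b", 0.1)
    print("sequential:", a, b)

async def concurrent() -> None:
    # Wrapping the coroutines in tasks schedules them on the event
    # loop immediately, so their sleeps overlap: about 0.1 s total.
    ta = asyncio.create_task(fetch("a", 0.1))
    tb = asyncio.create_task(fetch("b", 0.1))
    print("concurrent:", await ta, await tb)

if __name__ == "__main__":
    asyncio.run(sequential())
    asyncio.run(concurrent())
```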
derangedHorse commented on Brain has five 'eras' with adult mode not starting until early 30s   theguardian.com/science/2... · Posted by u/hackernj
javier123454321 · 21 days ago
It's not a given, but a personal anecdote: there simply hadn't been a situation in my life before kids that required such a sustained focus on the happiness and wellbeing of another person. It really is a type of growth that would be, I dare say, impossible to duplicate without kids. But of course, I could say that I've never had to live through war, and I don't think I could really claim to have built the fortitude that that experience gives you, so the point might be moot. Just to say, kids really give you a perspective that choosing to be childless does not, while being childless is a perspective that everyone with kids once had.
derangedHorse · 21 days ago
> while being childless is a perspective that all people with kids got

This is a naive view of the world. Being childless is a qualitatively different experience for people in different walks of life. A childless, financially unstable young adult will have a very different experience from that of a childless, financially stable middle-aged adult.

u/derangedHorse

Karma: 1020 · Cake day: January 21, 2018