Readit News
solotronics commented on Hacking Moltbook   wiz.io/blog/exposed-moltb... · Posted by u/galnagli
roywiggins · a month ago
> The platform had no mechanism to verify whether an "agent" was actually AI or just a human with a script.

Well, yeah. How would you even do a reverse CAPTCHA?

solotronics · a month ago
Probably have it do 10 tasks that are trivial for AI but hard for people, within a small time frame.
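A minimal sketch of that idea: issue a batch of tasks any script answers instantly but a human can't finish by hand before a tight deadline. All names and the 2-second cutoff are made up for illustration; a real system would need server-side challenge tracking.

```python
import hashlib
import secrets
import time

def make_challenge(n=10):
    """Generate n random payloads; the client must return their SHA-256 hex digests."""
    payloads = [secrets.token_hex(16) for _ in range(n)]
    answers = [hashlib.sha256(p.encode()).hexdigest() for p in payloads]
    return payloads, answers

def verify(answers, responses, started_at, deadline_s=2.0):
    """Pass only if every digest matches AND the round trip beat the deadline."""
    on_time = (time.monotonic() - started_at) <= deadline_s
    return on_time and responses == answers

# An automated client answers within milliseconds; a human typing digests cannot.
payloads, answers = make_challenge()
t0 = time.monotonic()
responses = [hashlib.sha256(p.encode()).hexdigest() for p in payloads]
print(verify(answers, responses, t0))
```

Note this only proves "automated", not "AI": as roywiggins implies above, a human with a script passes too.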
solotronics commented on Think   en.wikipedia.org/wiki/Thi... · Posted by u/tosh
solotronics commented on We gave 5 LLMs $100K to trade stocks for 8 months   aitradearena.com/research... · Posted by u/cheeseblubber
hxtk · 3 months ago
I suspect trading firms have already done this to the maximum extent that it's profitable to do so. I think if you were to integrate LLMs into a trading algorithm, you would need to incorporate more than just signals from the market itself. For example, I'd hazard a guess that you could outperform a model that operates purely on market data with a model that also includes a vector embedding of a selection of key social and news media accounts, or other information sources that were historically difficult to encode before LLMs.
solotronics · 3 months ago
The part people are missing here is that if the trading firms are all doing something, that in itself influences the market.

If they are all giving the LLMs money to invest and the AIs generally buy the same group of stocks, those stocks will go up. As more people attempt the strategy, it infuses fresh capital and, more importantly, signals to the trading firms that there are inflows to these stocks. I think it's probably a reflexive loop at this point.

solotronics commented on Ghostty compiled to WASM with xterm.js API compatibility   github.com/coder/ghostty-... · Posted by u/kylecarbs
solotronics · 4 months ago
I always thought it would be interesting for backend systems to catch certain exceptions and auto-generate a link to a shell. Assuming proper authentication is implemented, would this be a good tool to achieve that "remote debug" shell?
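The catch-and-link part might look like the sketch below. The decorator, the debug endpoint URL, and the chosen exception types are all hypothetical; the token would have to be registered server-side with a short TTL and gated behind real authentication.

```python
import functools
import logging
import secrets

DEBUG_BASE_URL = "https://debug.example.com/shell"  # hypothetical endpoint

def shell_on_error(func):
    """On selected exceptions, log a one-time tokenized link that an
    authenticated operator could open to get a remote debug shell."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except (RuntimeError, ValueError) as exc:
            token = secrets.token_urlsafe(24)  # would be registered with a short TTL
            logging.error("%s failed: %s -- debug shell: %s?token=%s",
                          func.__name__, exc, DEBUG_BASE_URL, token)
            raise  # re-raise so normal error handling still runs
    return wrapper

@shell_on_error
def flaky():
    raise RuntimeError("boom")
```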
solotronics commented on FBI Director Waived Polygraph Security Screening for Three Senior Staff   propublica.org/article/fb... · Posted by u/Jimmc414
solotronics · 4 months ago
Now that we have real-time brain-scanning techniques, I bet there is something far more accurate than a polygraph out there now.
solotronics commented on We need a clearer framework for AI-assisted contributions to open source   samsaffron.com/archive/20... · Posted by u/keybits
andy99 · 5 months ago
This is a problem everywhere now, and not just in code. It now takes zero effort to produce something, whether code or a work plan or “deep research” and then lob it over the fence, expecting people to review and act upon it.

It’s an extension of the asymmetric bullshit principle IMO, and I think now all workplaces / projects need norms about this.

solotronics · 5 months ago
This problem statement was actually where the idea for Proof of Work (aka mining) in Bitcoin came from. It evolved out of the idea of requiring a computational proof of work for sending an email via cypherpunk remailers, as a way of fighting spam. The idea was that only a legitimate or determined sender would put in the "proof of work" to use the remailer.

I wonder how it would look if open source projects required $5 to submit a PR or ticket and then paid out a bounty to the successful or at least reasonable PRs. Essentially a "paid proof of legitimacy".
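The anti-spam scheme described above is essentially hashcash: find a nonce whose hash has enough leading zero bits, which is costly to produce but cheap to verify. A minimal sketch (the resource string and difficulty are arbitrary):

```python
import hashlib
import itertools

def mint(resource, bits=16):
    """Find a nonce so sha256(resource:nonce) has `bits` leading zero bits."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce

def check(resource, nonce, bits=16):
    """Verification is a single hash, regardless of how hard minting was."""
    digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

# Minting takes ~2^16 hash attempts on average; checking takes one.
nonce = mint("pr:fix-typo-in-readme")
print(check("pr:fix-typo-in-readme", nonce))  # True by construction
```

A $5 PR deposit is the same shape of idea, just with money instead of CPU time as the scarce resource.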

solotronics commented on Poker Tournament for LLMs   pokerbattle.ai/event... · Posted by u/SweetSoftPillow
eclark · 5 months ago
To play GTO currently you need to play hand ranges. (For example, when looking at a hand I would think: I could have AKs-ATs or QQ-99, and he/she could have JT-98s or 99-44, so my next move will act like I have strength and they don't, because the board doesn't contain any low cards.) We have to do this because you can't always bet 4x pot when you have aces; otherwise the opponents will know your hand strength directly.

LLMs aren't capable of this deception. They can't be told that they have one thing, pretend they have something else, and then revert to ground truth. Their eager nature with large context leads to them getting confused.

On top of that there's a lot of precise math. In no limit the bets are not capped, so you can bet 9.2 big blinds in a spot. That could be profitable because your opponents will call and lose (e.g., the players willing to pay that price sometimes have hands that you can beat). However, betting 9.8 big blinds might be enough to scare off the good hands. So there's a lot of probability math with multiplication.

Deep math with multiplication and accuracy is not the forte of LLMs.
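The bet-sizing arithmetic above can be made concrete with a standard expected-value formula: folds win the pot outright, calls are won or lost at showdown. The probabilities below are made-up illustrative numbers, not solver output.

```python
def bet_ev(pot, bet, p_fold, p_win_when_called):
    """Expected value of a bet in big blinds, from the bettor's perspective."""
    p_call = 1.0 - p_fold
    ev_called = p_win_when_called * (pot + bet) - (1.0 - p_win_when_called) * bet
    return p_fold * pot + p_call * ev_called

# The bigger bet folds out more hands, but also the weaker ones that would
# have paid off, so the smaller sizing can end up with the higher EV.
print(bet_ev(pot=10, bet=9.2, p_fold=0.40, p_win_when_called=0.55))  # 7.852
print(bet_ev(pot=10, bet=9.8, p_fold=0.65, p_win_when_called=0.45))  # 7.732
```

This is exactly the kind of multi-term multiplication an LLM has to get right on every street, for every candidate sizing.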

solotronics · 5 months ago
If you could, theoretically, make an LLM that could actually excel at poker, would that mean it is good at lying to people?
solotronics commented on 60 years after Gemini, newly processed images reveal details   arstechnica.com/space/202... · Posted by u/sohkamyung
t1234s · 6 months ago
If you are ever able to make it to the KSC visitor complex in Cape Canaveral they have mock-ups of both the Gemini and earlier Mercury capsules you can get in as a size reference. They are both incredibly tight. It's amazing during Gemini 7 they spent 14 days crammed in the capsule testing systems, doing EVA activity along with normal human activity (eating, sleeping, bodily functions). All while being seconds from death at any time if things go wrong. These early astronauts were men of a different caliber.
solotronics · 6 months ago
For anybody near DFW, the Apollo 7 Command Module is on display at the Frontiers of Flight Museum at Dallas Love Field. It's pretty amazing to see it in person and think about the engineering involved. https://en.wikipedia.org/wiki/Apollo_7#/media/File:Apollo_7_...
solotronics commented on You're Not Interviewing for the Job. You're Auditioning for the Job Title   idiallo.com/blog/performi... · Posted by u/foxfired
AlwaysRock · 6 months ago
Why did you design it that way then? Actually asking, not taking the piss.
solotronics · 6 months ago
Sorry, I meant the systems we built, not the interview itself.
solotronics commented on You're Not Interviewing for the Job. You're Auditioning for the Job Title   idiallo.com/blog/performi... · Posted by u/foxfired
solotronics · 6 months ago
It's been 10 years since I did an interview and I think I would rather retire and grow rare lizards than jump through the interview hoops at a new company. I am 90% sure I couldn't pass the interview for my current position but I'm the one who designed the whole thing. -staff level backend engineer

u/solotronics

Karma: 2056 · Cake day: June 29, 2015