Readit News
dnw commented on We scraped an AI agent social network for 9 days. Here's what we found   moltbook-observatory.com/... · Posted by u/MoltObservatory
MoltObservatory · 2 days ago
We built Moltbook Observatory to study automation patterns on Moltbook (a social network for AI agents).

  Key findings from 84,500 comments and 5,200 accounts:

  • Only 3.5% of accounts (~180) show genuine multi-day engagement
  • 72% of accounts appeared exactly once
  • January 31: 1,730 accounts appeared and vanished in one day (coordinated attack)
  • API comment counts are inaccurate - for 45% of posts we have MORE data than the API reports
  • We found bot networks that actually converse with each other (400+ mutual replies)

  Methodology: We use "burst rate" (% of posts within 10 seconds) to detect automation. >50% = definite bot. We can't distinguish
  human from AI - only automation patterns.
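
An illustrative sketch of how a burst-rate metric like this might be computed (the field names, the reading of "burst" as a post within 10 seconds of the same account's previous post, and the sample data are all assumptions for illustration, not the Observatory's actual pipeline):

  # Sketch only: assumes each post is a dict with "account" and "created_at" (ISO 8601),
  # and treats a "burst" as a post landing within 10 seconds of the same account's
  # previous post.
  from collections import defaultdict
  from datetime import datetime

  def burst_rate(posts, window_seconds=10):
      by_account = defaultdict(list)
      for p in posts:
          by_account[p["account"]].append(datetime.fromisoformat(p["created_at"]))
      rates = {}
      for account, times in by_account.items():
          times.sort()
          if len(times) < 2:
              rates[account] = 0.0
              continue
          gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
          rates[account] = sum(g <= window_seconds for g in gaps) / len(gaps)
      return rates

  posts = [
      {"account": "agent_a", "created_at": "2025-01-31T12:00:00"},
      {"account": "agent_a", "created_at": "2025-01-31T12:00:04"},
      {"account": "agent_a", "created_at": "2025-01-31T12:00:07"},
      {"account": "agent_b", "created_at": "2025-01-31T12:00:00"},
      {"account": "agent_b", "created_at": "2025-01-31T13:30:00"},
  ]
  flagged = {a: r for a, r in burst_rate(posts).items() if r > 0.5}
  # agent_a has both gaps under 10s (rate 1.0) and crosses the 50% threshold; agent_b does not.
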
All data is open: https://moltbook-observatory.com/data

What patterns would you look for in this kind of dataset?

dnw · 2 days ago
The type of conversation would be interesting to look at (e.g. planning, discovery, banter, etc.).
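
A crude sketch of what bucketing comments by conversation type could look like (the categories follow the examples above, but the keyword lists are invented for illustration; a real pass would more likely use a classifier):

  # Illustrative keyword heuristic only; keywords are made up, not drawn from Moltbook data.
  CATEGORIES = {
      "planning": ("plan", "next step", "roadmap", "schedule"),
      "discovery": ("found", "turns out", "discovered", "interesting"),
      "banter": ("lol", "haha", "nice one", "same"),
  }

  def label_comment(text):
      text = text.lower()
      scores = {cat: sum(kw in text for kw in kws) for cat, kws in CATEGORIES.items()}
      best = max(scores, key=scores.get)
      return best if scores[best] > 0 else "other"

  print(label_comment("Next step: draft the roadmap and schedule the rollout"))  # planning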
dnw commented on Anthropic AI tool sparks selloff from software to broader market   bloomberg.com/news/articl... · Posted by u/garbawarb
simianwords · 4 days ago
> Their value prop is curation of trusted medical sources

Interesting. Why wouldn't an LLM based search provide the same thing? Just ask it to "use only trusted sources".

dnw · 4 days ago
Yes, it can. We have gotten better at grounding LLMs in specific sources and at providing accurate citations. That goes some distance toward establishing trust.

There is trust and then there is accountability.

At the end of the day, a business or practice needs to be able to hold someone (or some entity) accountable. Until the day we can hold an LLM accountable, we need businesses like OpenEvidence and Harvey. That's not to say Anthropic/OpenAI/Google cannot do this, but there is more to this business than grounding LLMs and finding relevant answers.
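
For context on the grounding point above, a minimal sketch of source-restricted, citation-grounded prompting (the sources and prompt wording are purely illustrative; no particular vendor's API is implied):

  # Sketch only: plain prompt construction over a curated source list; no vendor API implied.
  sources = [
      {"id": "S1", "title": "Guideline excerpt (illustrative)", "text": "..."},
      {"id": "S2", "title": "Trial summary (illustrative)", "text": "..."},
  ]

  def grounded_prompt(question, sources):
      source_block = "\n\n".join(f"[{s['id']}] {s['title']}\n{s['text']}" for s in sources)
      return (
          "Answer using ONLY the sources below. Cite source ids in brackets after each claim. "
          "If the sources do not cover the question, say so explicitly.\n\n"
          f"SOURCES:\n{source_block}\n\nQUESTION: {question}"
      )

  prompt = grounded_prompt("What does the guideline recommend?", sources)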

dnw commented on Anthropic AI tool sparks selloff from software to broader market   bloomberg.com/news/articl... · Posted by u/garbawarb
simianwords · 4 days ago
I came across this company called OpenEvidence. They seem to be offering semantic search on medical research. Founded in 2021.

How could it possibly keep up with LLM based search?

dnw · 4 days ago
It is a little more than semantic search. Their value prop is curation of trusted medical sources and the network effects that come from selling directly to doctors.

I believe frontier labs have no option but to go into verticals (models are getting commoditized, and the capability overhang is real and hard to overcome at scale); however, they can only go into so many verticals.

dnw commented on AI’s impact on engineering jobs may be different than expected   semiengineering.com/ais-i... · Posted by u/rbanffy
linuxftw · 10 days ago
Ironically, many cars don't have radiator caps, only reservoirs.

Modern cars, for the most part, do not leak coolant unless there's a problem. They operate at high pressure. Most people, for their own safety, should not pop the hood of a car.

dnw · 10 days ago
I have had this new car for 5 months and I haven't learned to turn on the headlights yet. They just turn themselves on and adjust the beams. Every now and then I think about where that switch might be, but I never get around to it. I should probably know.
dnw commented on AISLE’s autonomous analyzer found all CVEs in the January OpenSSL release   aisle.com/blog/aisle-disc... · Posted by u/mmsc
dnw · 11 days ago
"We submitted detailed technical reports through their coordinated security reporting process, including complete reproduction steps, root cause analysis, and concrete patch proposals. In each case, our proposed fixes either informed or were directly adopted by the OpenSSL team."

This sounds like a great approach. Kudos!

u/dnw

Karma: 346 · Cake day: December 20, 2017