Readit News
craigdalton commented on GPT-5.3-Codex   openai.com/index/introduc... · Posted by u/meetpateltech
halfcat · a month ago
In this new world, why stop there? It would be even better if engineers were also medical doctors and held multiple doctorate degrees in mathematics and physics and also were rockstar sales people.
craigdalton · a month ago
As a doctor, this sounds like an engineer's job.
craigdalton commented on Show HN: Evidex – AI Clinical Search (RAG over PubMed/OpenAlex and SOAP Notes)   getevidex.com... · Posted by u/amber_raza
amber_raza · 3 months ago
You just articulated the 'Holy Grail' of automated appraisal. Detecting bias across a career is a massive graph problem compared to checking a single paper. It essentially requires auditing an entire bibliography before synthesis.

I am adding 'Author Reputation/Bias Analysis' to the long-term roadmap. Thanks for the rigorous stress-test today.

craigdalton · 3 months ago
How will you do this? One author I don't trust (I sent them an error they missed in their paper - they didn't correct it, and there is systemic bias in their writing) was invited to write a review article by the New England Journal of Medicine - so they have an excellent reputation for all the world to see.
craigdalton commented on Show HN: Evidex – AI Clinical Search (RAG over PubMed/OpenAlex and SOAP Notes)   getevidex.com... · Posted by u/amber_raza
amber_raza · 3 months ago
This is a fantastic critique. Spot on. Freshness without appraisal is just an accelerated firehose of noise.

1. The Garbage Filter: Right now, I rely on a strict Hierarchy of Evidence to mitigate this (prioritizing Cochrane/Meta-analyses over observational studies), but you are absolutely right that LLMs can miss fatal methodological flaws in a single, high-ranking paper.

2. The 'Critic' Agent: I’m currently experimenting with a secondary 'Critic' pass. This is an LLM agent specifically prompted to act as a skeptic/methodologist to flag limitations before the main synthesis happens.

3. Multi-discipline prompting: The prompt you provided is a great case study in persona-based auditing. I’d love to learn more about the specific 'disciplines' or archetypes you’ve found most effective at catching these flaws. That is exactly the kind of domain expertise I’m trying to encode into the system.
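A minimal sketch of the two-pass idea described above: a "Critic" agent flags methodological limitations first, and the main synthesis is conditioned on those flags. All function names, prompts, and the `ask_llm` callable are hypothetical stand-ins for whatever LLM client the system actually uses.

```python
# Two-pass "Critic" pipeline: a skeptic/methodologist agent flags
# limitations before the main synthesis sees the paper.
# `ask_llm` is a stand-in for any chat-completion call (hypothetical).

CRITIC_PROMPT = (
    "You are a skeptical methodologist. List the major methodological "
    "limitations of the following paper abstract, one per line:\n\n{abstract}"
)

SYNTHESIS_PROMPT = (
    "Summarise the clinical evidence in this abstract. Explicitly weigh "
    "these flagged limitations:\n{limitations}\n\nAbstract:\n{abstract}"
)

def critic_pass(abstract: str, ask_llm) -> list[str]:
    """Run the critic agent and return its flagged limitations."""
    reply = ask_llm(CRITIC_PROMPT.format(abstract=abstract))
    return [line.strip("- ").strip() for line in reply.splitlines() if line.strip()]

def synthesise(abstract: str, ask_llm) -> str:
    """Main synthesis, conditioned on the critic's flags."""
    limitations = critic_pass(abstract, ask_llm)
    return ask_llm(SYNTHESIS_PROMPT.format(
        limitations="\n".join(f"- {l}" for l in limitations),
        abstract=abstract,
    ))
```

Keeping the critic and the synthesiser as separate calls (rather than one combined prompt) makes it harder for a persuasive abstract to drown out the limitations.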

craigdalton · 3 months ago
The personas have to be paper-specific, I believe, addressing the content and methods. I guess an LLM could do a once-over of the paper or meta-analysis to determine the best discipline-specific personas - it would be interesting to test that. But there are also the benefits of deep expertise and understanding a field for decades. For example, I know a set of authors who find significant associations in almost every study they do in a field, whereas others have variable results. They also seem to ignore good studies that disagree with their hypotheses and cite inferior studies that support their position in review papers - so I don't really trust their work. It would be great if an LLM could develop that kind of understanding and somehow deprecate a body of work with inherent author or institutional biases, even though on the surface the review looks legitimate. For a meta-analysis, it is often the papers that are omitted that are most telling. That means the LLM would need to redo the entire search and synthesis - yikes!
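The "once-over to pick personas" idea above could be tested with a simple two-stage pipeline: a first pass reads the methods section and proposes discipline-specific reviewer personas, then each persona runs its own critique. Everything here (prompts, `ask_llm`, function names) is a hypothetical sketch, not anyone's actual workflow.

```python
# Per-paper persona selection: a first LLM pass proposes reviewer
# disciplines from the methods section; each then critiques separately.
# `ask_llm` is a hypothetical stand-in for any LLM call.

def select_personas(methods_text: str, ask_llm) -> list[str]:
    """Ask the model which reviewer disciplines fit this specific paper."""
    reply = ask_llm(
        "Given this methods section, list the reviewer disciplines best "
        "placed to find fatal flaws, one per line:\n" + methods_text
    )
    return [p.strip() for p in reply.splitlines() if p.strip()]

def persona_reviews(paper_text: str, personas: list[str], ask_llm) -> dict[str, str]:
    """One independent critique per persona, keyed by discipline."""
    return {
        p: ask_llm(f"You are a {p}. Critically review this paper:\n{paper_text}")
        for p in personas
    }
```

Running the personas independently, rather than in one combined prompt, makes it easier to see which discipline caught which flaw.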
craigdalton commented on Show HN: Evidex – AI Clinical Search (RAG over PubMed/OpenAlex and SOAP Notes)   getevidex.com... · Posted by u/amber_raza
OutOfHere · 3 months ago
I warn against prioritizing Cochrane. It will block essential information from surfacing and can hold science back by a decade or more. The best way to let science emerge is to take peer-reviewed reviews and meta-analyses at face value. If a particular review is bad, it will soon be corrected by other reviews, so don't worry about it.
craigdalton · 3 months ago
I really disagree with this, and there is ample evidence that science is not "self-correcting". Read Retraction Watch. I personally wrote to a journal on 3 occasions and phoned them twice to alert them to an error in a paper that the authors were reluctant to own up to and correct. I had inside knowledge and was able to provide evidence of the error. The journal did nothing; they passed the message on to a range of sub-editors (a revolving door) - no investigation, no response. Google the "reproducibility crisis", including the coverage of the issue in Nature, to see how resistant to correction medical science can be.

Regarding Cochrane: it is reliable if it says a treatment does work or an exposure has an effect, but it sometimes misses effects because it relies only on particular sources of evidence (e.g. RCTs) - it was wrong on the effectiveness of masks. As an example of a reasonably up-to-date and evidence-based free review source online, see StatPearls.

craigdalton commented on Show HN: Evidex – AI Clinical Search (RAG over PubMed/OpenAlex and SOAP Notes)   getevidex.com... · Posted by u/amber_raza
craigdalton · 3 months ago
Excuse the blunt metaphor, but there is a risk here of turning on a fire-hose of "fresh" garbage. John Ioannidis, one of the doyens of evidence-based medicine, argues this very persuasively in "Why Most Published Research Findings Are False" (https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/). That is why platforms pay physicians/epidemiologists/specialists in their field hundreds of dollars per hour to sort the good papers from the bad.

After my training as a doctor I did a Masters in Clinical Epidemiology and spent an afternoon each week in a tutorial that reviewed papers in the top journals - about 20-30% of them had major flaws that were either ignored or dismissed by the authors. It may be worse now.

LLMs still have trouble picking up the subtleties of medical science and will miss papers with major flaws. I just ran a test on a paper that is often quoted as providing evidence of excess cancer risk in communities living close to unconventional gas facilities. When I asked ChatGPT 5.2 to review the paper for evidence of increased cancer risk with a simple prompt, it said the paper found such a risk. However, when I wrote a multi-discipline-based prompt for 5.2 and Gemini 3 Pro, each found the fatal flaw in the paper and advised that it did not provide such evidence. See the prompt and consider how the prompts would have to be individually developed for each paper and meta-analysis.

For review of a meta-analysis you would need prompts developed by expert methodologists and discipline specialists. Here is the prompt that worked: "You are an environmental epidemiologist and exposure scientist; critically review this paper's claim that the measured levels of unconventional gas emissions provide evidence of excess cancer risk: https://link.springer.com/article/10.1186/1476-069X-13-82"
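The working prompt above can be templated so each paper gets its own discipline framing. This is only an illustrative sketch of the "individually developed per paper" point; the template and variable names are assumptions, not the commenter's exact workflow.

```python
# Per-paper prompt templating: the personas and the claim under review
# are filled in for each paper, using the commenter's prompt as the model.

TEMPLATE = (
    "You are an {personas}, critically review this paper's claim that "
    "{claim}: {url}"
)

prompt = TEMPLATE.format(
    personas="environmental epidemiologist and exposure scientist",
    claim=(
        "the measured levels of unconventional gas emissions provide "
        "evidence of excess cancer risk"
    ),
    url="https://link.springer.com/article/10.1186/1476-069X-13-82",
)
```

For a different paper, only `personas` and `claim` change - which is exactly why these prompts cannot be written once and reused blindly.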

craigdalton commented on Show HN: Amplift – AI agent for influencer marketing, GEO, and social listening   amplift.ai/... · Posted by u/dora_wu
craigdalton · 3 months ago
Where do you submit the one month free trial code on the site?
craigdalton commented on The New AI Consciousness Paper   astralcodexten.com/p/the-... · Posted by u/rbanffy
yannyu · 4 months ago
Let’s make an ironman assumption: maybe consciousness could arise entirely within a textual universe. No embodiment, no sensors, no physical grounding. Just patterns, symbols, and feedback loops inside a linguistic world. If that’s possible in principle, what would it look like? What would it require?

The missing variable in most debates is environmental coherence. Any conscious agent, textual or physical, has to inhabit a world whose structure is stable, self-consistent, and rich enough to support persistent internal dynamics. Even a purely symbolic mind would still need a coherent symbolic universe. And this is precisely where LLMs fall short, through no fault of their own. The universe they operate in isn’t a world—it’s a superposition of countless incompatible snippets of text. It has no unified physics, no consistent ontology, no object permanence, no stable causal texture. It’s a fragmented, discontinuous series of words and tokens held together by probability and dataset curation rather than coherent laws.

A conscious textual agent would need something like a unified narrative environment with real feedback: symbols that maintain identity over time, a stable substrate where “being someone” is definable, the ability to form and test a hypothesis, and experience the consequences. LLMs don’t have that. They exist in a shifting cloud of possibilities with no single consistent reality to anchor self-maintaining loops. They can generate pockets of local coherence, but they can’t accumulate global coherence across time.

So even if consciousness-in-text were possible in principle, the core requirement isn’t just architecture or emergent cleverness—it’s coherence of habitat. A conscious system, physical or textual, can only be as coherent as the world it lives in. And LLMs don’t live in a world today. They’re still prisoners in the cave, predicting symbols and shadows of worlds they never inhabit.

craigdalton · 4 months ago
"The universe they operate in isn’t a world—it’s a superposition of countless incompatible snippets of text. It has no unified physics, no consistent ontology, no object permanence, no stable causal texture. It’s a fragmented, discontinuous series of words and tokens held together by probability and dataset curation rather than coherent laws."

I think some physicists and Buddhists would say this exactly describes the world humans inhabit. They might also agree that we live in such a world with the illusion that we have: "a unified narrative environment with real feedback: symbols that maintain identity over time, a stable substrate where “being someone” is definable, the ability to form and test a hypothesis, and experience the consequences".

The more I see LLM emergent behaviour unexpectedly simulate that of human cognition, the more I think it tells us as much about human cognition as about LLM behaviour.

craigdalton commented on Show HN: I collected 70k online communities – semantic search to find your niche   pluggo.ai/find-online-com... · Posted by u/giulioco
ibdf · 7 months ago
I searched for 1 community, got an error. Reached my "free" limit and was asked to sign up :(
craigdalton · 7 months ago
Same
craigdalton commented on Have I Been Pwned 2.0   troyhunt.com/have-i-been-... · Posted by u/LorenDB
anamexis · 10 months ago
I think the problem is:

1. How else would you penalize businesses?

2. What else would you do with fines?

If fines exist, it would seem foolish not to budget around that.

craigdalton · 10 months ago
How about fines going into a sovereign wealth fund (but not treated as a major source for the fund - more a bonus), so there is no short-term budget planning based on fine revenue?

u/craigdalton

Karma: 13 · Cake day: July 7, 2018
About
meet.hn/city/-32.9272881,151.7812534/Newcastle

Socials: x.com/craigbdalton
