Readit News
nmitchko commented on In 2025, Meta paid an effective federal tax rate of 3.5%   bsky.app/profile/rbreich.... · Posted by u/doener
nmitchko · 18 days ago
Can someone make a startup that allows me to do this as an individual?
nmitchko commented on Show HN: Semantic search over the National Gallery of Art   nga.demo.mixedbread.com/... · Posted by u/breadislove
nmitchko · 5 months ago
In case anyone wants to do this themselves, check out the pipeline here: https://github.com/isc-nmitchko/iris-document-search

ColNomic and NVIDIA models are great for embedding images, and MUVERA can transform those multi-vector embeddings into single 1D vectors.
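A minimal sketch of the MUVERA-style collapse step, under simplifying assumptions: each token vector is SimHashed into a bucket, vectors in the same bucket are summed, and the bucket sums are concatenated into one flat vector. (The actual MUVERA algorithm adds repetitions, empty-cluster filling, and projections; the function name here is illustrative, not from the linked repo.)

```python
import numpy as np

def fixed_dimensional_encoding(vectors, n_bits=3, seed=0):
    """Collapse a multi-vector embedding (n_tokens, d) into a single
    1D vector, roughly in the spirit of MUVERA's fixed-dimensional
    encodings (simplified sketch)."""
    rng = np.random.default_rng(seed)
    d = vectors.shape[1]
    planes = rng.standard_normal((n_bits, d))   # random SimHash hyperplanes
    # Bucket index = bit pattern of the projection signs (0 .. 2**n_bits - 1)
    bits = (vectors @ planes.T > 0).astype(int)
    buckets = bits @ (2 ** np.arange(n_bits))
    fde = np.zeros((2 ** n_bits, d))
    for b, v in zip(buckets, vectors):          # sum token vectors per bucket
        fde[b] += v
    return fde.ravel()                          # shape: (2**n_bits * d,)

# Documents with different token counts collapse to same-length vectors,
# so retrieval becomes a plain dot product / standard 1D vector search.
doc_a = np.random.default_rng(1).standard_normal((5, 4))
doc_b = np.random.default_rng(2).standard_normal((7, 4))
fa = fixed_dimensional_encoding(doc_a)
fb = fixed_dimensional_encoding(doc_b)
score = float(fa @ fb)
```

The payoff is that late-interaction embeddings (many vectors per document) can be indexed in an ordinary single-vector ANN store.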

nmitchko commented on Qwen3-Omni: Native Omni AI model for text, image and video   github.com/QwenLM/Qwen3-O... · Posted by u/meetpateltech
nmitchko · 6 months ago
Next steps for AI in general:

  - Additional modalities
  - Higher throughput (inferences per second)
  - Reaction-time tuning (latency vs. quality tradeoff) for visual and audio inputs/outputs
  - Built-in planning modules in the architecture (think premotor frontal lobe)
  - Time awareness during inference (towards an always-inferring / always-learning architecture)

nmitchko commented on LLMD: A Large Language Model for Interpreting Longitudinal Medical Records   arxiv.org/abs/2410.12860... · Posted by u/troyastorino
st-at-picnic · a year ago
Steve here, one of the co-authors. Totally valid on OpenBio. I will say that comparison numbers for this paper were such a challenge, in part because we found that a lot of the LLMs on the Medical LLM leaderboard struggled to follow even slight changes in instructions. On one hand it felt inaccurate to just print '[something very low]% Accuracy' on structuring/abstraction tasks and call it a day, but it also seemed like the amount of engineering effort needed to get non-trivial results from those LLMs was saying something important about how they worked.

I think that's especially true when you look at how well GPT-4o worked out of the box -- it makes clear what you get from the battle-hardening that's done to the big commercial models. For the numbers we did include, the thought was that the most meaningful signal was that going from 8B to 70B with Llama 3 actually gives you a lot in terms of mitigating that brittleness. That goes further toward explaining what we're seeing than showing a bunch of comparison LLMs fall over out of the box would.

In the end, we presented those models that did best with light tuning and optimization (say a week's worth of iteration or so). I anticipate that we'll have to expand these results to include OpenBio as we work through the conference reviewer gauntlet. Any others you think we definitely should work to include? Would definitely be helpful!

nmitchko · a year ago
No other public models are worth comparing to... Hippocratic advertises good benchmarks, but that might be marketing fluff.

Have you checked out dataset building with nemotron? The nemotron synthetic data builder is quite powerful.

Also, check out model merging. If you merge your model with the Llama 3.1 base, it may perform much better.

Check out Maxime Labonne's work on Hugging Face.
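The model-merging idea above can be sketched as a plain linear interpolation of matching parameter tensors from two checkpoints. This is a toy sketch, not a real merge pipeline; practical tools like mergekit offer fancier methods (SLERP, TIES), and the function name and toy state dicts here are illustrative.

```python
import numpy as np

def merge_linear(state_a, state_b, alpha=0.5):
    """Linearly interpolate two checkpoints' parameter tensors:
    alpha * A + (1 - alpha) * B for every matching key."""
    assert state_a.keys() == state_b.keys(), "checkpoints must share keys"
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k]
            for k in state_a}

# Toy "checkpoints" represented as dicts of arrays.
a = {"w": np.ones((2, 2)), "b": np.zeros(2)}
b = {"w": np.zeros((2, 2)), "b": np.ones(2)}
merged = merge_linear(a, b, alpha=0.25)
# merged["w"] is 0.25 everywhere; merged["b"] is 0.75 everywhere
```

The same idea applied to a fine-tune and its base model is why merging against the Llama 3.1 base can recover capabilities a narrow fine-tune has dulled.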

nmitchko commented on LLMD: A Large Language Model for Interpreting Longitudinal Medical Records   arxiv.org/abs/2410.12860... · Posted by u/troyastorino
nmitchko · a year ago
Interesting that they don't compare to OpenBio. The page 7 charts are quite weak.

https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B

nmitchko commented on Show HN: We built pitchpilot, an AI synthesizer to tailor presentations   pitchpilot.xyz... · Posted by u/nmitchko
nmitchko · 2 years ago
We're excited to share pitchpilot with the HN community. Our beta users have found the embedded audio particularly useful for enterprise sharing. We're keen to keep improving, and our mission is to make communication easier.

In the roadmap is adding video export, digital twin presentations, and real-time presentations. We don't wrap a public LLM, so we don't share any data.

nmitchko commented on US intelligence community is embracing generative AI   nextgov.com/artificial-in... · Posted by u/belter
nmitchko · 2 years ago
Given that Generative AI can now read brain scans [1] and this, I wonder how far away we are from "you thought negatively about something, the authorities are on their way".

[1] -- https://www.biorxiv.org/content/10.1101/2022.11.18.517004v3

nmitchko commented on Superfast Microsoft AI is first to predict air pollution for the whole world   nature.com/articles/d4158... · Posted by u/Brajeshwar
nmitchko · 2 years ago
Tin-foil hat time:

1. First, models will predict pollution. The outcomes will help shape urban policy. But these won't solve crime or stop people from driving.

2. Second, models will predict individual behavior and track person-level emissions. The outcomes will force behavior changes, mostly freedom-limiting ones.

3. Third, and finally, models will predict thoughts. The thought of driving instead of walking might trigger a response.

It's a slippery slope and we need to draw a line between prediction and policy.

nmitchko commented on Launch HN: Metriport (YC S22) – Open-source API for healthcare data exchange    · Posted by u/dgoncharov
nmitchko · 2 years ago
How does this compare to eHealth Exchange or other QHINs that have many years of experience and charge less?

u/nmitchko

Karma: 103 · Cake day: January 11, 2022