themanmaran commented on Halt and Catch Fire Syllabus (2021)   bits.ashleyblewer.com/hal... · Posted by u/Kye
hnlmorg · 6 days ago
I was largely disappointed. The subject matter was special but the execution was over the top. Every time I was starting to get drawn in, there would be another affair, car crash, exploding lorry or something else just as forced. I can’t even remember the number of times it felt like they’d “jumped the shark” in even just the first season alone.

There really wasn’t any need for half the dumb shit they did in that show. It didn’t add to the drama, it just made the whole thing feel completely fake. Which is impressive considering they’re writing largely about real world computing history.

And don’t get me started on the characters themselves. I think I liked maybe half the cast. The others made me cringe every time they were on screen.

It’s such a pity, because they could have had just as successful a show if they’d refined it a little.

themanmaran · 5 days ago
I'm glad to see I'm not alone here! I was really excited about it and tried pretty hard to get through the first season, but couldn't finish it.

It just had too much of that early 2000s cable-TV-style drama, which I understand is required since it was on network TV. I honestly think if it were made again today as a Netflix/Prime series it would be a lot better.

themanmaran commented on Show HN: Bizcardz.ai – Custom metal business cards   github.com/rhodey/bizcard... · Posted by u/rhodey
rhodey · 10 days ago
The FAQ page is linked from the home page:

https://bizcardz.ai/faq

On the FAQ page there are links to images of the end result / the physical cards.

themanmaran · 10 days ago
It would still be a lot nicer to see a sample in the repo you linked.

Instead of: GitHub link => bizcardz => FAQ => "Show me the end result"

themanmaran commented on Show HN: Bizcardz.ai – Custom metal business cards   github.com/rhodey/bizcard... · Posted by u/rhodey
themanmaran · 10 days ago
It would be nice to see an actual picture of the physical business card here. Also, do you handle sending the design to a manufacturer, or do I need to download it and send it myself?
themanmaran commented on Class-action suit claims Otter AI records private work conversations   npr.org/2025/08/15/g-s1-8... · Posted by u/nsedlet
bilekas · 12 days ago
> Last year, an AI researcher and engineer said Otter had recorded a Zoom meeting with investors, then shared with him a transcription of the chat including "intimate, confidential details" about a business discussed after he had left the meeting. Those portions of the conversation ended up killing a deal,

I'm sorry, but this is another example of not checking AI's work. The excessive recording is one thing, but blindly trusting the AI's output and then using it as a company document for a client is on you.

themanmaran · 12 days ago
This just seems like massive user error. The same thing could have happened in a low-tech environment; the notetaker just made it more obvious.

Ex: Hop on a conference call with a group of people. Person A "leaves early" but doesn't hang up the phone, and the remaining group talks about sensitive info they didn't want Person A to hear.

themanmaran commented on Model intelligence is no longer the constraint for automation   latentintent.substack.com... · Posted by u/drivian
patrickhogan1 · 14 days ago
Do you have a concrete example of what you mean?

For example, the article above was insightful. But the author's pointing to thousands of disparate workflows that could be solved with the right context, without providing a single concrete example of how he accomplishes this, makes the post weaker.

themanmaran · 13 days ago
Sure, concrete example: we do conversational AI for banks and spend a lot of time on the compliance side. The biggest thing is we never want the LLM to give back an answer that could violate something like ECOA (the Equal Credit Opportunity Act).

So every message generated by the first LLM is passed to a second series of LLM requests, each paired with a distilled version of the legislation, e.g. "Does this message imply likelihood of credit approval? (True/False)". Then we can score the original LLM response against that rubric.

All of the compliance checks are very standardized and require very little reasoning, since they can mostly be distilled into a series of ~20 booleans.
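
A minimal sketch of what that second-pass rubric might look like, assuming an OpenAI-style chat API (the check questions, model name, and function name are illustrative, not the actual rubric):

    # Hypothetical sketch: second-pass boolean compliance checks.
    from openai import OpenAI

    client = OpenAI()

    # Each check distills one piece of legislation into a yes/no question.
    COMPLIANCE_CHECKS = [
        "Does this message imply likelihood of credit approval?",
        "Does this message discourage the customer from applying?",
        # ...roughly 20 booleans in total
    ]

    def violates_compliance(message: str) -> bool:
        """Run the drafted reply through every boolean check; flag on any True."""
        for question in COMPLIANCE_CHECKS:
            result = client.chat.completions.create(
                model="gpt-4o-mini",  # cheap model: these checks need little reasoning
                messages=[{
                    "role": "user",
                    "content": f"{question} Answer strictly True or False.\n\nMessage:\n{message}",
                }],
            )
            if "true" in result.choices[0].message.content.lower():
                return True
        return False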

themanmaran commented on Model intelligence is no longer the constraint for automation   latentintent.substack.com... · Posted by u/drivian
mrlongroots · 15 days ago
I very much disagree. To attempt a proof by contradiction:

Let us assume that the author's premise is correct, and LLMs are plenty powerful given the right context. Can an LLM recognize the context deficit and frame the right questions to ask?

They cannot: LLMs have no ability to understand when to stop and ask for directions. They routinely produce contradictions, fail simple tasks like counting the letters in a word, etc. They can't even reliably execute my "ok, modify this text in canvas" vs. "leave canvas alone, provide suggestions in chat, apply an edit once approved" instructions.

themanmaran · 14 days ago
This depends on whether you mean LLMs in the single-shot sense, or LLMs plus the software built around them. I think a lot of people conflate the two.

In our application we use a multi-step check_knowledge_base workflow before and after each LLM request. Essentially: make a separate LLM request to check the query against the existing context to see if more info is needed, and a second check after generation to see whether the output text exceeded its knowledge base.

And the results are really good. Coding agents, as in your example, are definitely stepwise more complex, but the same guardrails can apply.
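
A minimal sketch of that pre/post guardrail, again assuming an OpenAI-style chat API (function and model names are illustrative):

    # Hypothetical sketch: knowledge-base checks before and after generation.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    def answer_with_guardrails(query: str, context: str) -> str:
        # Pre-check: can the query be answered from the existing context alone?
        covered = ask(f"Context:\n{context}\n\nQuery: {query}\n"
                      "Can this query be answered from the context alone? Answer Yes or No.")
        if covered.lower().startswith("no"):
            return "I need more information to answer that."

        draft = ask(f"Context:\n{context}\n\nAnswer the query: {query}")

        # Post-check: did the generated answer stray beyond the provided context?
        grounded = ask(f"Context:\n{context}\n\nAnswer:\n{draft}\n"
                       "Is every claim in the answer supported by the context? Answer Yes or No.")
        return draft if grounded.lower().startswith("yes") else \
            "I need more information to answer that."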

themanmaran commented on Using AI to secure AI   mattsayar.com/letting-inm... · Posted by u/MattSayar
ryao · 15 days ago
The quotation is more impactful in the original Latin: Quis custodiet ipsos custodes?
themanmaran · 15 days ago
custodes[.]ai would be a great startup name
themanmaran commented on Don't bother parsing: Just use images for RAG   morphik.ai/blog/stop-pars... · Posted by u/Adityav369
ArnavAgrawal03 · a month ago
That's an interesting point. We've found that for most use cases, over 5 pages of context is overkill. Having a small LLM conversion layer on top of images also ends up working pretty well (i.e. instead of direct OCR, passing batches of 5 images - if you really need that many - to smaller vision models and having them extract the most important points from the document).

We're currently researching surgery on the cache or attention maps for LLMs to make larger batches of images work better. Sliding Window or Infinite Retrieval seem like promising directions to explore.

Also - and this is speculation - I think that the jump in multimodal capabilities that we're seeing from models is only going to increase, meaning long-context for images is probably not going to be a huge blocker as models improve.

themanmaran · a month ago
This just depends a lot on how well you can pare down the context before passing it to an LLM.

Ex: reading contracts or legal documents. Usually it's a 50-page document you can't effectively cherry-pick from, since different clauses or sections are referenced multiple times across the full document.

In these scenarios it's almost always better to pass the full document into the LLM rather than running RAG. And if you're passing the full document, it's better as text than as images.

themanmaran commented on Don't bother parsing: Just use images for RAG   morphik.ai/blog/stop-pars... · Posted by u/Adityav369
themanmaran · a month ago
Hey, we've done a lot of research on this side [1] (OCR vs direct image extraction, plus general LLM benchmarking).

The biggest problem with direct image extraction is multipage documents. We found that on single-page extraction (OCR => LLM vs image => LLM), direct image extraction was slightly favored. But anything beyond 5 images had a sharp fall-off in accuracy compared to OCR-first.

Which makes sense: long-context recall over text is already a hard problem, but that's what LLMs are optimized for. Long-context recall over images is still pretty bad.

[1] https://getomni.ai/blog/ocr-benchmark

themanmaran commented on Ex-Waymo engineers launch Bedrock Robotics to automate construction   techcrunch.com/2025/07/16... · Posted by u/boulos
themanmaran · a month ago
One big barrier I haven't seen mentioned is all the OEM competition they are going to face.

Caterpillar, John Deere, etc. already have remote-operation vehicles, and a lot of provisions on what types of kits can be retrofitted onto their equipment without violating their terms/warranties.

I'm sure this is already something they've taken into consideration, but it seems like this will be more focused on partnerships with existing OEMs rather than selling add-on kits for current fleets.

u/themanmaran

Karma: 2608 · Cake day: January 8, 2020
About
Founder at OmniAI: https://getomni.ai

Open source OCR: https://github.com/getomni-ai/zerox

https://twitter.com/TylerMaran
