seany62 commented on Built an AI tool to visualize large codebases - would love feedback    · Posted by u/ilia_khatko
seany62 · 3 months ago
Devin AI does this amazingly.
seany62 commented on Show HN: I made an AI that turns live lectures into structured notes, mind-maps, PDF   notorium.app... · Posted by u/pranav_harshan
seany62 · 3 months ago
Great idea! Will it highlight parts where the professor says something like "this is important and will be on the exam..."? All of the information on the exam (which dictates the majority of your score in the class at most US universities) must be conveyed to the student one way or another (worksheets, lectures, etc.). A cool offshoot would be an "AI Exam Prep" feature that guesses what will be on the exam, based on previous exams and where each piece of information came from.
seany62 commented on I am (not) a failure: Lessons learned from six failed startup attempts   blog.rongarret.info/2025/... · Posted by u/lisper
jfengel · 7 months ago
My main lesson from running a startup: don't. And if you do, quit when the going gets tough. Perseverance does not pay off.

Obviously it doesn't always end badly. But we get a massively skewed view from survivor bias.

My life turned out pretty damn well once I got a plain ordinary job working for someone else. But I don't kid myself: when it comes to starting a startup, I did fail. The main lesson I learned was that I was always going to.

seany62 · 7 months ago
> My main lesson from running a startup: don't.

I hear this a lot and I think it is good advice because the only person who should actually start a startup is the one who sees this but still does it.

seany62 commented on Cerebrum: Simulate and infer synaptic connectivity in large-scale brain networks   svbrain.xyz/2024/12/20/ce... · Posted by u/notallm
seany62 · 8 months ago
Based on my very limited knowledge of how current "AI" systems work, this seems like the much better approach to achieving true AI. We've only modeled one small aspect of the human (the neuron) and brute-forced it to work. It takes an LLM millions of examples to learn what a human can in a couple of minutes, so how are we even "close" to achieving AGI?

Should we not mimic our biology as closely as possible rather than trying to model how we __think__ it works (i.e. chain of thought, etc.)? This is how neural networks got started, right? Recreate something nature has taken millions of years developing and see what happens. This stuff is so interesting.

seany62 commented on Launch HN: Midship (YC S24) – Turn PDFs, docs, and images into usable data    · Posted by u/maxmaio
seany62 · 10 months ago
Are users able to export their organized data?
seany62 commented on SimpleQA   openai.com/index/introduc... · Posted by u/surprisetalk
seany62 · 10 months ago
Any way to see the actual questions and answers? Where can I find simple_qa_test_set.csv?
seany62 commented on Computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku   anthropic.com/news/3-5-mo... · Posted by u/weirdcat
HarHarVeryFunny · 10 months ago
The "computer use" ability is extremely impressive!

This is a lot more than an agent able to use your computer as a tool (and understanding how to do that): it's basically an autonomous reasoning agent that you can give a goal to, and it will then use reasoning, as well as its access to your computer, to achieve that goal.

Take a look at their demo of using this for coding.

https://www.youtube.com/watch?v=vH2f7cjXjKI

This seems to be an OpenAI o1 killer. It may be using an agent to do reasoning (it's still not clear exactly what is under the hood), as opposed to o1 supposedly being a single model (but still basically a loop around an LLM), but the reasoning it is able to achieve in pursuit of a real-world goal is very impressive. It'd be mind-boggling if we hadn't had the last few years to get used to this escalation of capabilities.

It's also interesting to consider this from the POV of Anthropic's focus on AI safety. On their website they have a bunch of advice on how to stay safe by sandboxing, limiting what it has access to, etc., but at the end of the day this is a very capable AI able to use your computer and browser to do whatever it deems necessary to achieve a requested goal. How far are we from paperclip optimization, or at least autonomous AI hacking?

seany62 · 10 months ago
From what I'm seeing on GitHub, this could technically already have been built, right? Is it not just taking screenshots of the computer screen, deciding what to do from there, and looping until it gets to the solution?
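The loop the comment is guessing at can be sketched roughly as below. This is only an illustration of the screenshot-decide-act idea, not Anthropic's actual implementation; `take_screenshot`, `ask_model`, and `perform` are hypothetical stand-ins passed in as callables.

```python
def agent_loop(goal, take_screenshot, ask_model, perform, max_steps=20):
    """Screenshot -> decide -> act, looping until the model reports it is done.

    The three callables are hypothetical stand-ins for real tooling:
    take_screenshot() returns an image, ask_model(goal, image) returns an
    action dict such as {"type": "click", "x": 10, "y": 20}, and
    perform(action) executes it on the machine.
    """
    for _ in range(max_steps):
        image = take_screenshot()
        action = ask_model(goal, image)
        if action.get("type") == "done":
            return True          # model says the goal is achieved
        perform(action)
    return False                 # gave up after max_steps iterations
```

Capping the iterations matters in practice: without `max_steps`, a model that never emits a "done" action would loop (and spend tokens) forever.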
seany62 commented on Ask HN: How do you add guard rails in LLM response without breaking streaming?    · Posted by u/curious-tech-12
seany62 · 10 months ago
> Hi all, I am trying to build a simple LLM bot and want to add guard rails so that the LLM responses are constrained.

Give examples of how the LLM should respond. Always give it a default response as well (e.g. "If the user response does not fall into any of these categories, say x").
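That advice can be sketched as a system prompt with example responses plus an explicit fallback. Everything here is a hypothetical illustration: the bot's domain, the category names, and the fallback wording are made up, and `build_messages` just assembles a chat-style message list.

```python
# Hypothetical guardrail prompt: a few example categories with the desired
# behavior for each, and a default response for everything else.
GUARDRAIL_PROMPT = """You are a support bot. Handle the user's message in exactly one of these ways:

- Billing question: answer using the billing FAQ.
- Technical issue: ask for the error message and relevant logs.
- If the message does not fall into any of these categories, say exactly:
  "Sorry, I can only help with billing or technical questions."

Example:
User: My payment failed twice.
Assistant: (billing) Let's check your payment method...
"""

def build_messages(user_input):
    """Prepend the guardrail prompt to every request as the system message."""
    return [
        {"role": "system", "content": GUARDRAIL_PROMPT},
        {"role": "user", "content": user_input},
    ]
```

The key part is the explicit default: without it, the model tends to improvise a response for out-of-scope inputs instead of refusing.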

> I can manually add validation on the response but then it breaks streaming and hence is visibly slower in response.

I've had this exact issue (streaming + JSON). Here's how I approached it:

1. Instruct the LLM to return the key "test" in its response.
2. Make the streaming call.
3. Build your JSON response as a string as you get chunks from the stream.
4. Once you detect the "test" key in that string, start sending all subsequent chunks wherever you need.
5. Once you get the closing quotation mark, end the stream.

u/seany62

Karma: 63 · Cake day: February 6, 2024