Readit News
utdiscant commented on Time to Take a Stand – Sam Altman (2017)   blog.samaltman.com/time-t... · Posted by u/aarghh
utdiscant · 7 months ago
A lot of tech people seem to have changed their minds on this. Some ways to reason about that are:

1) They are doing it for opportunistic reasons. They can't afford to be enemies with Trump.

2) They legitimately changed their opinion about a wide array of things they used to believe in strongly enough to be outspoken about.

3) They believe that while Trump's core beliefs are not aligned with theirs, the alternative is worse. And potentially they believe there is a need for some form of over-correction to fix what has happened over the last 4 years.

utdiscant commented on How we made our AI code review bot stop leaving nitpicky comments   greptile.com/blog/make-ll... · Posted by u/dakshgupta
utdiscant · 8 months ago
"We picked the latter, which also gave us our performance metric - percentage of generated comments that the author actually addresses."

This metric would go up if you leave almost no comments. Would it not be better to find a metric that rewards you for generating many comments which are addressed, not just having a high relevance?

You even mention this challenge yourselves: "Sadly, even with all kinds of prompting tricks, we simply could not get the LLM to produce fewer nits without also producing fewer critical comments."

If that was happening, it doesn't sound like it would be reflected in your performance metric.
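To make the concern concrete, here is a toy sketch (with made-up numbers, not from the post) of how a "% of comments addressed" metric can reward a bot that leaves almost no comments, while a metric like "addressed comments per PR" rewards surfacing more issues that authors actually fix:

```python
# Hypothetical illustration: comparing the post's precision-style metric
# against a volume-aware alternative. All numbers are invented.

def percent_addressed(addressed: int, total_comments: int) -> float:
    """Metric from the post: share of generated comments the author acted on."""
    return addressed / total_comments

def addressed_per_pr(addressed: int, num_prs: int) -> float:
    """Alternative: rewards producing many comments that get addressed."""
    return addressed / num_prs

# Bot A: cautious, 10 comments across 100 PRs, 9 addressed.
# Bot B: thorough, 200 comments across 100 PRs, 120 addressed.
print(percent_addressed(9, 10))     # 0.9  -- A "wins" on % addressed
print(percent_addressed(120, 200))  # 0.6
print(addressed_per_pr(9, 100))     # 0.09 -- but B fixes far more real issues
print(addressed_per_pr(120, 100))   # 1.2
```

Under the first metric, the cautious bot looks better even though it leads to far fewer addressed issues overall.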

utdiscant commented on DeepThought-8B: A small, capable reasoning model   ruliad.co/news/introducin... · Posted by u/AnhTho_FR
lowyek · 9 months ago
I asked it 'find two primes whose sum is 123' .. it has been in deep thought for 5 minutes, just looping and looping over seemingly repeated hallucinations of the right path. (btw, chatgpt immediately answers 61 and 62 lol.. so much for intelligence)
utdiscant · 9 months ago
Here is what I got from o1:

To find two prime numbers that sum to 123, let's consider the properties of prime numbers and the nature of their sums:

1. Prime numbers greater than 2 are odd: except for 2, all prime numbers are odd.

2. Sum of two odd numbers is even: adding two odd numbers results in an even number.

3. Sum of an even and an odd number is odd: adding an even number (like 2) and an odd number results in an odd sum.

Given that 123 is an odd number, the only way two primes can sum to 123 is if one of them is 2 (the only even prime) and the other is an odd prime. Let’s test this possibility: • 2 + x = 123 ⇒ x = 121

However, 121 is not a prime number (since 121 = 11 × 11).

Next, consider the possibility of adding two odd primes: • Odd Prime + Odd Prime = Even Number

Since adding two odd primes yields an even number, it’s impossible for two odd primes to sum to 123 (an odd number).

Conclusion: There are no two prime numbers that sum to 123.

Answer: There are no two prime numbers whose sum is 123; such primes do not exist.
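o1's parity argument is easy to verify by brute force. A minimal sketch (names are illustrative): any pair summing to the odd number 123 must include 2, and 123 − 2 = 121 = 11 × 11 is composite, so no pair exists.

```python
# Brute-force check that no two primes sum to 123.

def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# Check every p up to 123 // 2; the pair is (p, 123 - p).
pairs = [(p, 123 - p) for p in range(2, 62)
         if is_prime(p) and is_prime(123 - p)]
print(pairs)          # []
print(is_prime(121))  # False (121 = 11 * 11)
```

The empty result confirms the conclusion, and also shows why 61 + 62 is wrong: 62 is even and greater than 2, so it cannot be prime.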

utdiscant commented on The inventor of the automatic rice cooker   spectrum.ieee.org/toshiba... · Posted by u/jnord
utdiscant · 10 months ago
I found the ratio of rice and water to be the main issue - and the rice cooker didn't fix that. I stumbled upon a recipe with a different approach that I use now.

Take plenty of water and bring it to a boil. Add any amount of rice to the boiling water and let it boil for 8 minutes. Then drain the water, remove the pot from the stove, and put the lid on for a few minutes. This removes the need to measure quantities.

utdiscant commented on Insiders Stealing Instagram Usernames?   javier.computer/instagram... · Posted by u/edent
utdiscant · 10 months ago
"I've kept taking screenshots of everyone who's asked about acquiring my account. One interesting pattern: the majority of these requests come from profiles without any photos. I find it so weird that people are so eager to get a username when they don't even share content!"

If I had an account with a huge amount of followers, then I would also not initially reach out from that main account in order to negotiate the price.

utdiscant commented on Learning to Reason with LLMs   openai.com/index/learning... · Posted by u/fofoz
utdiscant · a year ago
Feels like a lot of commenters here miss the difference between just doing chain-of-thought prompting, and what is happening here, which is learning a good chain of thought strategy using reinforcement learning.

"Through reinforcement learning, o1 learns to hone its chain of thought and refine the strategies it uses."

When looking at the chain of thought (COT) in the examples, you can see that the model employs different COT strategies depending on which problem it is trying to solve.

utdiscant commented on Generating Mazes   healeycodes.com/generatin... · Posted by u/itsjloh
utdiscant · a year ago
Back in my early university days I did a short project (https://www.dropbox.com/scl/fi/ch33p2xaq7xavgu9uk0qv/index.p...) on how to generate "hard" mazes. While there are many algorithms to create mazes, there is no real literature (maybe for good reason) on how to create mazes that are hard for humans to solve.
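For context on what such generators look like, here is a minimal sketch of one standard algorithm (an iterative depth-first "recursive backtracker"); it is illustrative only and not taken from the linked project. A crude proxy for "hardness" is the length of the resulting solution path.

```python
# Illustrative maze generator: iterative DFS (recursive backtracker)
# on a w x h grid. Produces a "perfect" maze, i.e. a spanning tree
# of the grid, so any two cells are connected by exactly one path.
import random

def generate_maze(w: int, h: int, seed: int = 0) -> set:
    random.seed(seed)
    visited = {(0, 0)}
    passages = set()   # unordered pairs of cells with the wall removed
    stack = [(0, 0)]
    while stack:
        x, y = stack[-1]
        unvisited = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < w and 0 <= y + dy < h
                     and (x + dx, y + dy) not in visited]
        if unvisited:
            nxt = random.choice(unvisited)      # carve toward a random neighbour
            passages.add(frozenset([(x, y), nxt]))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                          # dead end: backtrack
    return passages

maze = generate_maze(5, 5)
# A spanning tree on n cells always has exactly n - 1 passages.
print(len(maze))  # 24
```

Since every run yields a spanning tree, varying "hardness" would have to come from the shape of the tree (e.g. long, winding solution paths with many plausible-looking dead ends), which is exactly the part that lacks literature.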
utdiscant commented on A generalist AI agent for 3D virtual environments   deepmind.google/discover/... · Posted by u/nuz
YeGoblynQueenne · a year ago
That's not right. If DeepMind's agents could really transfer what they learned from one game to another, that they've never seen before, their "specialized" agents, that only trained on one game, would then be able to perform well on unseen games. Instead, in order to get an agent with good performance in one unseen game they had to train it in all but that particular game.

That's typical of the poor generalisation displayed by neural nets and clearly not how humans do transfer learning.

utdiscant · a year ago
But humans have already trained on an incredible number of games (including reality) when they play No Man's Sky for the first time. What they say here is that training on N-1 games makes you better at the Nth game. So you just continue to scale this up.
utdiscant commented on A generalist AI agent for 3D virtual environments   deepmind.google/discover/... · Posted by u/nuz
utdiscant · a year ago
Definitely feels like we are advancing towards AGI quite rapidly. As another commenter mentioned (https://news.ycombinator.com/item?id=39693035), the OpenAI DotA game was a big milestone for me.

If you think about it abstractly, humans are basically models that take input from our senses, do some internal processing, and then take actions with our bodies. SIMA is the same: it takes input from video and acts through keyboard actions. There is nothing preventing the introduction of additional types of input or different kinds of actions.

The ability to train on one game and transfer that knowledge to a different game should allow future models like this to train in games, by reading text, watching videos etc, and then transfer all of that knowledge to the real world.

utdiscant commented on What went wrong at Techstars   founderscoop.com/2024/wha... · Posted by u/josh_carterPDX
utdiscant · 2 years ago
Having a simple, clear focus is hard to overvalue. Since doing YC 6 years ago, there has never been a second of doubt about the priorities (imo).

YC is there to make money for its investors. And the way to do that is to invest in the best startups and make them huge.

The only source of conflict then is between YC and the startups. For example, do you enforce strict legal structures on all companies? And here the dominant priority is of course YC itself. But this is expected, and given that rationality, also easy to work with.

u/utdiscant

Karma: 208 · Cake day: March 17, 2016