Readit News
sainez commented on LLMs aren't "trained on the internet" anymore   allenpike.com/2024/llms-t... · Posted by u/ingve
solidasparagus · 2 years ago
100 percent accurate will never happen, nor does it need to. But think about the intelligence of an average human. Can we beat that? At least along some collection of concrete axes, enough to create a form of intelligence that can rightfully be called general?

It remains to be seen, but the days where I scoffed at that idea are firmly in the past where they belong. Today we are building machines with intelligence high enough that it is forcing us to reconsider and redefine what intelligence is. And there is a huge amount of progress just sitting in front of us, waiting to be fed into models.

sainez · 2 years ago
Even AGI as a whole is overhyped. It's a worthwhile goal, but AI that merely beats humans on narrow metrics is already economically valuable because it scales.
sainez commented on GPT-4o   openai.com/index/hello-gp... · Posted by u/Lealen
mppm · 2 years ago
Such an impressive demo... but why did they have to give it this vapid, giggly socialite persona that makes me want to switch it off after thirty seconds?
sainez · 2 years ago
You should be able to adjust this with a system prompt, given that it has end-to-end speech capabilities now.
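As a minimal sketch of what that adjustment might look like (the model name, persona wording, and `build_request` helper are all illustrative assumptions, not anything from OpenAI's announcement):

```python
# Hypothetical sketch: steering an assistant's persona via a system prompt.
# The model identifier and persona text below are assumptions for illustration.
def build_request(user_text: str) -> dict:
    """Build a chat-style request payload with a persona-setting system message."""
    return {
        "model": "gpt-4o",  # assumed model identifier
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a calm, concise assistant. "
                    "Avoid exclamations and filler; answer in a neutral tone."
                ),
            },
            {"role": "user", "content": user_text},
        ],
    }

payload = build_request("Summarize today's weather.")
```

Swapping the system message for a different persona is the whole trick; the user message stays untouched.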
sainez commented on Ethereum has blobs. Where do we go from here?   vitalik.eth.limo/general/... · Posted by u/bpierre
talldayo · 2 years ago
Testing a dapp off the mainnet is like ensuring your website works on localhost. It will find some issues, but it's not representative of how it will look in deployment.

In any case, for actual usage it should surprise nobody why everyone conflates Ethereum with money. No, your L2 chain does not qualify as an official solution.

sainez · 2 years ago
> Testing a dapp off the mainnet is like ensuring your website works on localhost

I would argue the exact opposite. A website will be deployed to different versions of different browsers on different operating systems. A smart contract will exist on a single distributed computer. It sounds like the actual problem is people treating smart contract development as cavalierly as web app development.

sainez commented on Ethereum has blobs. Where do we go from here?   vitalik.eth.limo/general/... · Posted by u/bpierre
ASinclair · 2 years ago
If you don’t know which chain you’re interacting with how can you trust your transactions are secured by a chain at all?
sainez · 2 years ago
How does this differ from e.g. online banking? Does every user manually check encryption algorithms and keys?
sainez commented on Ethereum has blobs. Where do we go from here?   vitalik.eth.limo/general/... · Posted by u/bpierre
hanniabu · 2 years ago
That's not how it works, not every interaction needs to be onchain
sainez · 2 years ago
Don't know why this is downvoted. It is possible (and probably desirable) to build applications where only certain data is stored on chain.
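One common shape of that pattern is to anchor only a cryptographic digest on chain and keep the bulky data elsewhere. A minimal sketch (the dicts simulating "chain" and off-chain storage are illustrative assumptions; a real contract would store the 32-byte digest on-chain):

```python
import hashlib

# Illustrative sketch: keep bulky data off-chain and anchor only its
# SHA-256 digest "on chain" (simulated here with plain dicts).
chain_storage: dict[str, str] = {}    # stand-in for on-chain state
offchain_store: dict[str, bytes] = {}  # stand-in for off-chain storage

def publish(record_id: str, data: bytes) -> None:
    """Store the full data off-chain and its digest on-chain."""
    digest = hashlib.sha256(data).hexdigest()
    chain_storage[record_id] = digest   # cheap: 32 bytes on chain
    offchain_store[record_id] = data    # bulky payload stays off chain

def verify(record_id: str) -> bool:
    """Check that the off-chain data still matches the on-chain digest."""
    data = offchain_store[record_id]
    return hashlib.sha256(data).hexdigest() == chain_storage[record_id]

publish("doc-1", b"large off-chain document ...")
```

If the off-chain copy is tampered with, `verify` fails, so the chain still guarantees integrity without paying to store the whole payload.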
sainez commented on No star, No fix   github.com/daeuniverse/da... · Posted by u/rustdesk
andersonmvd · 2 years ago
I fail to see why it would be a violation, for the following reasons:

1) it's a response to a user's request, i.e., not initiated by the repo author

2) it depends on the user's consent.

3) it's not automated (most of the items from the policy are related to automation).

4) the repo author has no obligation whatsoever to maintain the project. He is not paid or forced to do it.

5) if the user really wants to apply this change or disagrees with this practice, he/she can always fork it.

That said, I understand that it may still not feel "fair" compared to other projects that don't follow this practice. Or the feeling of "wanting to help but you're asked to do some things first".

Companies already do something similar before accepting your pull requests [0], and that takes far longer than giving a star - yet I didn't see a complaint about it on HN.

[0] https://github.com/google/eddystone/issues/258

sainez · 2 years ago
It is at least partially automated: the issue was closed by a bot, and re-opening it requires a star.
sainez commented on Promptbase: All things prompt engineering   github.com/microsoft/prom... · Posted by u/CharlesW
intrasight · 2 years ago
You wouldn't say you're "prompt engineering" when communicating with your spouse or boss.
sainez · 2 years ago
But you would say you used "social engineering" to manipulate an organization: https://en.wikipedia.org/wiki/Social_engineering_(security)
sainez commented on Promptbase: All things prompt engineering   github.com/microsoft/prom... · Posted by u/CharlesW
wrs · 2 years ago
Yes, isn’t the whole “alignment” fear basically that if we had smarter than human AGI, we would need smarter than human prompt engineering?
sainez · 2 years ago
Alignment refers to the process of aligning AI with human values. I don't see why a superhuman AI would require different prompting than is in use today.
sainez commented on Switch Transformers C – 2048 experts (1.6T params for 3.1 TB) (2022)   huggingface.co/google/swi... · Posted by u/tosh
sainez · 2 years ago
I'm much more interested in lower-parameter models that are optimized to punch above their weight. There is already interesting work being done in this space with Mistral and Phi, and I see research coming out virtually every week trying to address the low-hanging fruit.
sainez commented on Emmett Shear becomes interim OpenAI CEO as Altman talks break down   theverge.com/2023/11/20/2... · Posted by u/andsoitis
mcv · 2 years ago
Is there a good article, or does anyone have the slightest inkling, about what the real conflict here is? There's a lot of articles about the symptoms, but what's the core issue here?

The board claims Altman lied. Is that it? About what? Did he consistently misinform the board about a ton of different things? Or about one really important issue? Or is this just an excuse disguising the actual issues?

I notice a lot of people in the comments talking about Altman being more about profit than about OpenAI's original mission of developing safe, beneficial AGI. Is Altman threatening that mission or disagreeing with it? It would be really interesting if this was the real issue, but if it was, I can't believe it came out of nowhere like that, and I would expect the board to have a new CEO lined up already and not be fumbling for a new CEO and go for one with no particular AI or ethics background.

Sutskever gets mentioned as the primary force behind firing Altman. Is this a blatant power grab? Or is Sutskever known to have strong opinions about that mission of beneficial AGI?

I feel a bit like I'm expected to divine the nature of an elephant by only feeling a trunk and an ear.

sainez · 2 years ago
I'm not sure what more information people need. The original announcement was pretty clear: https://openai.com/blog/openai-announces-leadership-transiti....

Specifically:

> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.

> it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

So the board did not have confidence that Sam was acting in good faith. Watch any of Ilya's many interviews; he speaks openly and candidly about his position. It is clear to me that Ilya is completely committed to the principles of the charter and sees a very real risk of sufficiently advanced AI causing disproportionate harm.

People keep trying to understand OpenAI as a hypergrowth SV startup, which it is explicitly not.

u/sainez

Karma: 169 · Cake day: March 5, 2023