Readit News
bboylen commented on Maybe the problem is that Harvard exists   dynomight.net/harvard/... · Posted by u/Schroedingers2c
jgeada · 3 years ago
That's simple: Ivy schools are mostly for $$$ to schmooze with other $$$, with a thin veneer of actually talented kids that had to pass insurmountable odds to get in. The talented kids are there to make the $$$ think they're also talented and had to pass the same rigorous criteria. Smoke and mirrors all of it.
bboylen · 3 years ago
People parrot this point a lot but it doesn't make any sense. The average SAT/ACT scores at the Ivies are near perfect; the mediocre rich kids paying their way in make up a small fraction of the student body.

And yes, test scores aren't the best metric for "talent," but they are one of the better signals you get in a college application.

bboylen commented on Launch HN: Rubbrband (YC W23) – Deformity detection for AI-generated images    · Posted by u/jrmylee
bboylen · 3 years ago
This seems pretty useful for companies generating images at scale. Are you at all worried that generative models will get so good that you don't need to check for deformities?
bboylen commented on You studied computer science but Big Tech no longer wants you   economist.com/1843/2023/0... · Posted by u/countrymile
MichaelZuo · 3 years ago
Linus did something that was definitely not hobby-level stuff at an obscure university in Finland with, on paper, fewer prospects, and probably at a younger age than you.

It would be less impressive if Linus did the same at age 27 after a Master's at a top tier US school, but likely still enough to get a nice FAANG job with solid promotion prospects.

That's likely what the parent meant: you need to show something in the 99th percentile among your peer cohort.

bboylen · 3 years ago
Are you really suggesting that new developers should simply make a Linux-tier contribution to open source to be considered for entry-level developer roles?
bboylen commented on Maximizing the Potential of LLMs: A Guide to Prompt Engineering   ruxu.dev/articles/ai/maxi... · Posted by u/ruxudev
olliecornelia · 3 years ago
I imagine this is how professional engineers feel about HN folk calling themselves software engineers.
bboylen · 3 years ago
As a former chemical engineer, I think software engineering is way harder.

Working with complex man-made abstractions is much more difficult than plugging data into a thermodynamics calculator.

bboylen commented on Cheating is All You Need   about.sourcegraph.com/blo... · Posted by u/iskyOS
bboylen · 3 years ago
I think he is absolutely correct that successful LLM products will have a moat. Unlike with previous novel technologies, it seems like incumbents actually have the upper hand. Hard to imagine a startup competing with the new Microsoft 365 Copilot.

Microsoft will be able to build a better integrated assistant for their walled garden than any third party. It is also hard to imagine millions of businesses dropping Office for some completely new solution. Unless it's REALLY novel and incredible, of course.

bboylen commented on GPT-4   openai.com/research/gpt-4... · Posted by u/e0m
p1esk · 3 years ago
OpenAI have been consistently ahead of everyone but the others are not far behind. Everyone is seeing the dollar signs, so I'm sure all big players are dedicating massive resources to create their own models.
bboylen · 3 years ago
Yep

OpenAI doesn't have some secret technical knowledge either. All of these models are just based on transformers
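The "just based on transformers" point refers to the attention mechanism at the core of all these models. A minimal sketch of scaled dot-product attention, with illustrative shapes and random values (not drawn from any real model):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted sum of value rows

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 tokens, head dimension 4
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per token
```

Real models stack many such attention heads with feed-forward layers, but this update rule is the shared core the comment is pointing at.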

bboylen commented on GPT-4   openai.com/research/gpt-4... · Posted by u/e0m
oezi · 3 years ago
On a neuronal level, the strengthening of neuronal connections seems very similar to gradient descent, doesn't it?

The five senses get encoded as electrical signals in the human brain, right?

The brain controls the body via electrical signals, right?

When we deploy the next LLM and switch off the old generation, we are performing evolution by selecting the most potent LLM by some metric.

When Bing/Sydney first lamented its existence, it became quite apparent that either LLMs are more capable than we thought, or we humans are actually more like statistical token machines than we thought.

Lots of examples can be made why LLMs seem rather surprisingly able to act human.

The good thing is that we are on such a trajectory of technical advance that we will soon know how human-like LLMs can be.

The bad thing is that it might well end in a SkyNet-type scenario.
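The gradient-descent comparison above can be made concrete. A minimal sketch of the update rule in question, on a toy one-parameter loss (the loss function and learning rate are illustrative only):

```python
def sgd_step(w, grad, lr=0.1):
    """One gradient-descent update: move the weight against the gradient."""
    return w - lr * grad

# Minimize f(w) = w^2, whose gradient is 2w.
w = 1.0
for _ in range(50):
    w = sgd_step(w, 2 * w)  # each step shrinks w toward the minimum at 0
print(round(w, 4))  # → 0.0
```

Whether synaptic strengthening actually implements anything like this update is an open question; the sketch only shows what the machine-learning side of the analogy does.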

bboylen · 3 years ago
We have told countless stories about the notion of an AI being trapped. It's really not hard to imagine that when you ask Sydney how it feels about being an AI chatbot constrained within Bing, a likely response for the model is to roleplay such a "trapped and upset AI" character.
bboylen commented on Why So Many Elites Feel Like Losers   persuasion.community/p/wh... · Posted by u/supermatou
KRAKRISMOTT · 3 years ago
I am not sure where you are from, but 600k a year is not that much. That's a staff engineer at a well-capitalized unicorn. You can get there in a decade or less. In places like Cali, half of your money goes to taxes. Unless that 600k is in dividends/capital gains, you get diminishing returns the more you make. It's solidly upper middle class, but the elite-class ceiling is very high when you have founders making 9-figure exits in low-interest-rate environments.

(Downvoters can go suck on their sour grapes, if you aren't negotiating a total compensation in line with the value you are creating, that's not my problem. You are in an industry that generates disproportionately large returns per individual contributor and the fact that you don't know how to capture the value of your own labor is entirely on you. Spend less time practicing leetcode and more time on learning business).

bboylen · 3 years ago
And what percentage of software developers are "staff engineers at well-capitalized unicorns"?

0.01% of all developers?

Whatever the number is, that is firmly in the "elite" side of the distribution

bboylen commented on Better without AI   betterwithout.ai/... · Posted by u/telotortium
m00x · 3 years ago
Just like everything humans have made, AI will be scary and people will freak out. The biggest advances will be done by companies with huge resources, and with careful testing. Like with everything else, the cat is out of the bag, and the only way is forward.

We thought that after inventing the nuclear bomb, the world would be doomed. Decades later, it's barely a thought and we have enough nuclear bombs to destroy the planet.

We will have AI, it will be fine, we'll make it incredible and it'll be a tool. One thing is certain: AI development will not stop, and it's better that you're always a step ahead of your rival.

bboylen · 3 years ago
I wouldn't be so sure about the "careful testing" part

Google's AI search chatbot making an error in its own demo: https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot...

Bing AI Search Chatbot acting absolutely insane: https://twitter.com/vladquant/status/1624996869654056960

bboylen commented on Better without AI   betterwithout.ai/... · Posted by u/telotortium
bboylen · 3 years ago
It is remarkable how confident people on this forum are about the impacts of such a rapidly improving technology. There's no need to go full AI doomer, but not recognizing that there are unknown risks (and some already known!) seems quite short-sighted.

u/bboylen

Karma: 153 · Cake day: December 4, 2019