Readit News
vacuumcl commented on The Dead Planet Theory   arealsociety.substack.com... · Posted by u/sebg
vacuumcl · 6 months ago
Reading this gives me some slight existential dread, since most of my time here I just read other people’s comments.

In any case, I think a big barrier to starting things can often simply be the fear of failure or of wasting time. I spent a lot of time making electronic music as a hobby, and am probably better than most people at understanding and playing music, but for music to have a meaningful impact in my life I would still need to put in a lot more work.

On the other hand, I studied physics and mathematics far beyond the average person (getting a PhD and publishing multiple papers), but I had the support of the university and the academic environment to give me that extra push to do it.

There are so many things I would like to pursue in my free time: building a small startup, writing a book, making YouTube videos, etc. I know that the most important thing is to just start, but the decision paralysis and the uncertainty about whether it will work out in the end can definitely be a barrier, since I could also just go out with friends and enjoy my life instead of spending time alone.

vacuumcl commented on Meta Llama 3   llama.meta.com/llama3/... · Posted by u/bratao
refulgentis · a year ago
Honestly, I swear to god, been working 12 hours a day with these for a year now, llama.cpp, Claude, OpenAI, Mistral, Gemini:

The long context window isn't worth much, and it's currently creating more problems than it solves for the bigs, with their "unlimited"-use pricing models.

Let's take Claude 3's web UI as an example. We build it, and go the obvious route: we simply use as much of the context as possible, given chat history.

Well, now once you're 50-100K tokens in, the initial prefill takes forever, O(10 seconds). Now we have to display a warning whenever that is the case.

Now we're generating an extreme amount of load on GPUs for prefill, and it's extremely unlikely to be helpful. Writing code? Previous messages are likely to be ones that needed revisions. The input cost is ~$0.02 / 1000 tokens, and it's not arbitrary or free: prefill is expensive and runs on the GPU.

Less expensive than inference, but not that much. So now we're burning ~$2 worth of GPU time for the 100K conversation. And all of the bigs use a pricing model of a flat fee per month.
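The cost arithmetic above can be sketched in a few lines. The numbers are the comment's rough figures, not official pricing:

```python
# Rough figure from the comment above: ~$0.02 per 1K input tokens.
# This is a back-of-envelope assumption, not a published rate.
PREFILL_COST_PER_1K_TOKENS = 0.02

def prefill_cost(context_tokens: int) -> float:
    """GPU cost burned re-reading the whole chat history on one turn."""
    return context_tokens / 1000 * PREFILL_COST_PER_1K_TOKENS

# A 100K-token conversation burns ~$2 of prefill per message sent --
# every turn, since the full history is re-read each time.
print(f"${prefill_cost(100_000):.2f}")  # → $2.00
```

Under a flat monthly fee, that per-message cost scales with conversation length while revenue stays fixed, which is why long chats are the expensive ones.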

Now, even our _paid_ customers have to take message limits on all our models. (this is true, Anthropic quietly introduced them end of last week)

Functionally:

The output limit is 4096 tokens, so tasks that are a map function (e.g., reword Moby Dick in Zoomer slang) need the input split into 4096-token chunks anyway.
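The chunking workaround for map-style tasks looks roughly like this. The 4-characters-per-token ratio is a crude stand-in; a real client would count tokens with the model's actual tokenizer:

```python
# Since output is capped at 4096 tokens, an input meant to map roughly
# 1:1 onto output (e.g. rewording a whole book) must be fed in slices
# no bigger than the output cap. Token counting here is approximated
# as 1 token ≈ 4 characters, which is an assumption for illustration.
def chunk_for_map(text: str, max_tokens: int = 4096,
                  chars_per_token: int = 4) -> list[str]:
    max_chars = max_tokens * chars_per_token
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

chunks = chunk_for_map("x" * 50_000)
print(len(chunks))  # → 4 slices of at most 16384 characters each
```

Each slice is then sent as its own request, which is exactly why the long context window buys nothing for this class of task: the bottleneck is the output cap, not the input window.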

The only use cases I've seen thus far that _legitimately_ benefit are needle in a haystack stuff, video with Gemini, or cases with huuuuuge inputs and small outputs, like, put 6.5 Harry Potter books into Gemini and get a Mermaid diagram out connecting characters.

vacuumcl · a year ago
As a user, I've been putting in some long mathematical research papers and asking detailed questions about them in order to understand certain parts better. I feel some benefit from it, because the model can access the full context of the paper, so it is less likely to misunderstand notation that was defined earlier, etc.
vacuumcl commented on Meta Llama 3   llama.meta.com/llama3/... · Posted by u/bratao
hermesheet · a year ago
Lots of great details in the blog: https://ai.meta.com/blog/meta-llama-3/

Looks like there's a 400B version coming up that will be much better than GPT-4 and Claude Opus too. Decentralization and OSS for the win!

vacuumcl · a year ago
Comparing to the numbers here https://www.anthropic.com/news/claude-3-family those of Llama 400B seem slightly lower, but of course it's just a checkpoint that they benchmarked, and they are still training further.
vacuumcl commented on Our next-generation model: Gemini 1.5   blog.google/technology/ai... · Posted by u/todsacerdoti
icoder · 2 years ago
Not OP, but if I wrote as myself, I don't think you'd understand me that easily ;)
vacuumcl · 2 years ago
I understand it just fine ;)
vacuumcl commented on Our next-generation model: Gemini 1.5   blog.google/technology/ai... · Posted by u/todsacerdoti
dingclancy · 2 years ago
Essentially, the focus seems to be on leveraging the media buzz around Gemini 1.0 by highlighting the development of version 1.5. While GPT-4's position relative to Gemini 1.5 remains unclear, and the specifics of ChatGPT 4.5 are yet to be disclosed, it's worth noting that no official release has taken place until the functionality is directly accessible in user chats.

Google appears to be making strides in catching up.

When it comes to my personal workflow and accomplishing tasks, I still find ChatGPT to be the most effective tool. My familiarity with its features has made it indispensable. The integration of mentions and tailored GPTs seamlessly enhances my workflow.

While Gemini may match the foundational capabilities of LLMs, it falls short in delivering a product that efficiently aids in task completion.

vacuumcl · 2 years ago
I don't mean this in a bad way, but when I read a comment like yours, with phrases like "seamlessly enhances my workflow" and "efficiently aids in task completion", I can't help but feel it's ChatGPT-generated. If so, I think that's a shame: just write like yourself.

But maybe you do, and I am seeing patterns in sand.

vacuumcl commented on Deluge, a portable sequencer, synthesizer and sampler   github.com/SynthstromAudi... · Posted by u/eating555
rosmax_1337 · 2 years ago
Once again I was bit by not reading the link thoroughly enough. I hope I'm not the only one that's happened to. ;)

Nonetheless, the problem of naming things actually stands. Trademark law aside, imagine if you tried to make a program like a torrent client called "Volvo". Names exist in a global namespace, and things will collide, and that's bad for everyone involved.

Had I made the original hardware product, I would have named it the "Synthstrom Deluge" to avoid this problem.

vacuumcl · 2 years ago
People refer to it as the Synthstrom Deluge all the time. The product is called Deluge and the company is Synthstrom. It would be strange if they called their product the Synthstrom Synthstrom Deluge, wouldn't it?
vacuumcl commented on In the gut's 'second brain,' key agents of health emerge   quantamagazine.org/in-the... · Posted by u/rzk
belugacat · 2 years ago
This is why the reductionist argument of "your brain is reducible to a computer with inputs/outputs like any other, all we have to do is reimplement it" of AGI proponents always fell flat to me.

It's now becoming clear that we can't just take the brain in isolation, treating the spinal nerve like a PCI-E lane - the gut has to come with it. And if the gut comes with it, all the other organs (skin top of the list) probably do as well.

Now to model a human brain, you need to model an entire human, along with all the complexity of the microbiota, interactions of the organs with the environment... it all just falls apart.

vacuumcl · 2 years ago
I don't see why this changes anything. There are some sort-of intelligent networks of cells in the gut that help with digestion and other processes. That doesn't change the fact that consciousness resides in the brain.
vacuumcl commented on The Sad Bastard Cookbook   traumbooks.itch.io/the-sa... · Posted by u/throwaway154
ehPReth · 2 years ago
I wish I could just take one pill a day (or three) for nutrition (and to satiate hunger) and not have to eat meals at all. That would be amazing: not having to deal with food at all. Buying, preparing, ordering, overeating.
vacuumcl · 2 years ago
One pill a day, perhaps not, but you can definitely sustain yourself on meal-replacement products like the ones from Soylent or Huel. I get about a third of my calories from those, and it saves a lot of time.
vacuumcl commented on The History and Future of Charisma   noemamag.com/the-secret-h... · Posted by u/Caiero
nico · 2 years ago
Fascinating

For anyone interested in a practical book about charisma and how to learn/develop it, I highly recommend this book that completely changed my life: The Charisma Myth by Olivia Fox Cabane

vacuumcl · 2 years ago
I read that book as well when I was 20, and while it was helpful, I was surprised, watching a YouTube talk by her many years later, not to find her particularly charismatic!
vacuumcl commented on How can some infinities be bigger than others?   quantamagazine.org/how-ca... · Posted by u/digital55
jdkee · 2 years ago
"- It is true there are more points on a plane then on a line (Cantor's theorem.)"

There is a bijection between the points on a line and the points on a plane or in any n-dimensional space.

vacuumcl · 2 years ago
Yes, that was a bad mistake on my part. Thanks for pointing it out!
