Readit News
qualudeheart commented on AlphaCode2 powered by Gemini performs better than 85% of programmers   twitter.com/RemiLeblond/s... · Posted by u/lairv
qualudeheart · 2 years ago
Most of my posts here have been satirical or exaggerated. One that wasn't is my aphorism that DeepMind can train a new model faster than you can go back to grad school. You'd have gone back to school for a Master's degree in computer science, and by the time you slithered out of that box, AlphaCode 2 would have slithered out of its box and into your cubicle. No more space for you!
qualudeheart commented on XB, or eXtreme Bullshitting: a less misleading name for LLMs   twitter.com/jawj/status/1... · Posted by u/gmac
a1j9o94 · 3 years ago
I appreciate your perspective, but I have a slightly different view. First, it's important to recognize that the term "language model," as it's used in the context of machine learning, does not claim to fully capture the entirety of human language ability. Rather, it refers to a model's ability to predict or generate text that follows the patterns found in the language data on which it was trained.

You mention that a language model can't model language, only text, because it only observes examples of text, not language. However, I would argue that these text examples are in fact representations of human language ability. The text that language models are trained on is a product of human language ability. This text, while not a perfect representation of language, is a direct outcome of language use and thus carries within it the patterns and structures that language models can learn.

Moreover, models like Transformers do have a sort of latent space: the embedding space. This is a continuous space where words and phrases are represented as vectors. The distances and directions between vectors in this space capture semantic and syntactic relationships, which suggests that these models are indeed learning some aspects of language, not just text.
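
For concreteness, here is a minimal Go sketch of the geometry being described. The cosine function is standard, but the toy 4-dimensional vectors for "cat", "dog" and "car" are invented purely for illustration; real embedding spaces have hundreds or thousands of dimensions:

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity between two vectors:
// the dot product divided by the product of their magnitudes.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	// Hypothetical 4-dimensional embeddings, made up for this example.
	cat := []float64{0.9, 0.8, 0.1, 0.0}
	dog := []float64{0.85, 0.75, 0.2, 0.1}
	car := []float64{0.1, 0.0, 0.9, 0.8}

	fmt.Printf("cat~dog: %.3f\n", cosine(cat, dog)) // high: related concepts
	fmt.Printf("cat~car: %.3f\n", cosine(cat, car)) // low: unrelated concepts
}
```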

On top of this, many of these models are trained on diverse data sources, including transcriptions of spoken conversations. This introduces elements of pragmatics, or how language is used in real-world conversations, into the training data.

Finally, the translation capabilities of these models further suggest that they are capturing something beyond mere text. They are able to translate between different languages, which implies that they are learning some underlying linguistic structures that are shared across languages.

Again, it's important to stress that these models are far from capturing the full complexity of human language ability. However, to say that they only model text and not language seems to me an oversimplification. They are learning patterns and structures in the data that are intrinsically tied to human language use, and so in a sense, they are modeling aspects of language.

qualudeheart · 3 years ago
This reads as AI-generated.
qualudeheart commented on Creatures That Don’t Conform   emergencemagazine.org/ess... · Posted by u/sohkamyung
spangry · 3 years ago
I'm quite interested to hear about the connection between LGBT ideology and Gnosticism / transhumanism. Care to elaborate, lo_zamoyski?
qualudeheart · 3 years ago
That’s just James Lindsay’s pseudointellectual tripe, and has further roots in some early twentieth century political theorists. You can’t trust Lindsay because he’s a political operative first and a scholar second. I don’t know of any scholars of gnosticism who think gnosticism is connected to LGBT in a serious way.
qualudeheart commented on IBM to pause hiring in plan to replace 7,800 jobs with AI   finance.yahoo.com/news/ib... · Posted by u/isaacfrond
seu · 3 years ago
AI will not replace humans. IBM's CEOs and managers will replace humans with AI. Shareholders will force IBM to replace humans with AI. Let us stop blaming the only thing that is _not_ making any decision. Capitalism is a powerful idea that will kill the same people who vouch for it.
qualudeheart · 3 years ago
Managers will do every job.
qualudeheart commented on IBM to pause hiring in plan to replace 7,800 jobs with AI   finance.yahoo.com/news/ib... · Posted by u/isaacfrond
wesapien · 3 years ago
A lot of corporations are replacing human jobs with new tech instead of enhancing them, despite what the tech gurus say in their TED talks about the benefits of tech X. Another reason to flip cars.
qualudeheart · 3 years ago
Smart Money flips houses. The Smartest Money flips GPUs.
qualudeheart commented on BabyAGI: An Autonomous and Self-Improving Agent   github.com/oliveirabruno0... · Posted by u/headalgorithm
spacetime_cmplx · 3 years ago
Unless you have several hundred million documents, just write a simple encoder that serializes the embedding vectors to a flat binary file.

Writing code from scratch to process and search 200k unstructured documents -- parsing, cleaning, chunking, the OpenAI embedding API, serialization code, linear search with cosine similarity, and the actual time to debug, test and run all this -- took me less than 3 hours in Go. The flat binary representation of all vectors is under 500 MB. I even went ahead and made it mmap-friendly for the fun of it, even though I could have read it all into memory.

Even the dumb linear search I wrote takes just 20-30ms per query on my Macbook for the 200k documents. The search results are fantastic.
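
The commenter didn't post their code, but a minimal Go sketch of the flat-binary-file-plus-linear-scan approach might look like the following. The 1536 dimensionality is an assumption (the size of OpenAI's ada-002 embeddings), and save/load/nearest are hypothetical names for illustration, not the commenter's actual implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
	"os"
)

const dim = 1536 // assumed embedding size (e.g. OpenAI ada-002)

// save writes all vectors back-to-back as little-endian float32s.
// This flat layout is what makes the file trivially mmap-friendly.
func save(path string, vecs [][]float32) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	for _, v := range vecs {
		if err := binary.Write(f, binary.LittleEndian, v); err != nil {
			return err
		}
	}
	return nil
}

// load reads the flat file back into a slice of vectors.
func load(path string) ([][]float32, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	n := len(data) / (dim * 4)
	vecs := make([][]float32, n)
	for i := 0; i < n; i++ {
		v := make([]float32, dim)
		for j := 0; j < dim; j++ {
			bits := binary.LittleEndian.Uint32(data[(i*dim+j)*4:])
			v[j] = math.Float32frombits(bits)
		}
		vecs[i] = v
	}
	return vecs, nil
}

// nearest does a dumb linear scan with cosine similarity and
// returns the index of the best-matching vector.
func nearest(query []float32, vecs [][]float32) int {
	best, bestSim := -1, float32(math.Inf(-1))
	for i, v := range vecs {
		var dot, nq, nv float32
		for j := range v {
			dot += query[j] * v[j]
			nq += query[j] * query[j]
			nv += v[j] * v[j]
		}
		sim := dot / (float32(math.Sqrt(float64(nq))) * float32(math.Sqrt(float64(nv))))
		if sim > bestSim {
			best, bestSim = i, sim
		}
	}
	return best
}

func main() {
	// Toy demo; real use would store OpenAI embedding API outputs here.
	vecs := [][]float32{make([]float32, dim), make([]float32, dim)}
	vecs[0][0], vecs[1][1] = 1, 1
	if err := save("vectors.bin", vecs); err != nil {
		panic(err)
	}
	loaded, err := load("vectors.bin")
	if err != nil {
		panic(err)
	}
	q := make([]float32, dim)
	q[1] = 1
	fmt.Println("nearest index:", nearest(q, loaded)) // prints 1
}
```

A real version would also store document IDs alongside the vectors, but the core idea is just this flat layout plus a brute-force scan.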

qualudeheart · 3 years ago
Could you share the code with us?
qualudeheart commented on BabyAGI: An Autonomous and Self-Improving Agent   github.com/oliveirabruno0... · Posted by u/headalgorithm
93po · 3 years ago
I feel like this is missing the point. There is no amount of human effort that can do things like break modern encryption on any reasonable timescale. There is an upper bound on what any number of humans can comprehend, especially on a timescale of decades. An ASI would have the ability to comprehend and problem-solve on a level thousands of times higher than a human, and if we had the source code for an ASI today, without any previous training, it could likely both train itself on the entire world's body of knowledge and break AES-strength encryption in a matter of seconds.

Encryption is just one example. Its ability across the entirety of math and science would be equally powerful.

qualudeheart · 3 years ago
Maybe it would take a sabbatical to heal its trauma like real humans. Maybe it would go to therapy.
qualudeheart commented on BabyAGI: An Autonomous and Self-Improving Agent   github.com/oliveirabruno0... · Posted by u/headalgorithm
AnimalMuppet · 3 years ago
Well... humans who pick their own "training data" can wind up in echo chambers and, shall we say, diverging from reality.

I don't think we're on the threshold of AGI, but it's interesting that the wanna-be AIs are running into human issues...

qualudeheart · 3 years ago
Humans who pick their own training data can get great results by choosing which courses to attend at college or grad school.
qualudeheart commented on BabyAGI: An Autonomous and Self-Improving Agent   github.com/oliveirabruno0... · Posted by u/headalgorithm
corobo · 3 years ago
Has anyone got this to actually do anything at all yet?

I see all of the half-demos where it doesn't complete anything. I've tried it myself and.. well, if we're being honest, it was shite. I've seen a whole load of tweet threads saying what it could be used for..

Literally just looking for one example of a successful run. Anything at all.

I can definitely see that there may be potential (if not this then the ideas that come off the back of this) but even I don't have a real use case for it yet, I'm just tinkering.

I guess my XY question: Am I being suckered into the web3 of AI? Lots of buzz, no use case.

qualudeheart · 3 years ago
There are use cases as far as the eye can see. How about copywriting?
qualudeheart commented on BabyAGI: An Autonomous and Self-Improving Agent   github.com/oliveirabruno0... · Posted by u/headalgorithm
freediver · 3 years ago
This is not AGI by any stretch of the imagination. It does not even appear to be a step on the path to AGI.

Furthermore, due to the autoregressive nature of GPT models, the more auto-gpt generates (the more it works, the more tasks it performs..), the chance of things going off the right path grows exponentially, and then it is 'doomed' to the end [1].
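
A back-of-the-envelope sketch of the compounding (the 0.99 per-step success probability is an assumption chosen purely for illustration):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// If each generated step stays on track with probability 0.99,
	// the chance an n-step chain never derails is 0.99^n,
	// which decays exponentially in n.
	p := 0.99
	for _, n := range []float64{10, 100, 1000} {
		fmt.Printf("n=%4.0f  P(all correct)=%.4f\n", n, math.Pow(p, n))
	}
}
```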

Thus, the chance of this being actually useful for anything longer than what a simple prompt can already do with a tool like ChatGPT is very low.

The end result is an impressive concept but a practically unusable tool. And the problem, in general, is that as auto-gpt improves (which it will, at an impressive pace), so will our ambition in using it, which will lead to constant disappointment: how we feel about it today will generally be how we feel about it in the future. Always needing "just a bit more", but never really there.

We already have a "baby AGI" that has been deployed in a production environment for a few years: it is called Tesla self-driving. It was supposed to get us from point A to point B completely autonomously. And for 6 years now it has been "almost there", but never really there (and arguably never will be).

What this does, though, is create and inflate a giant FOMO, and the best way of dealing with FOMO (long term) is to stay on firm ground, observe, and wait for clarity and the right action.

[1] Watch in particular Yann LeCun's presentation at https://www.youtube.com/watch?v=x10964w00zk

qualudeheart · 3 years ago
Can the exponential probability increase mentioned by LeCun be mitigated, for example with an approximation?

Exact nearest neighbor search, for example, is O(n) per query (O(n^2) over all pairs), but algorithms for approximate results, like locality-sensitive hashing, run in sublinear time per query.
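
As one concrete illustration of such an approximation (my own sketch in Go, not something from LeCun's talk): random-hyperplane locality-sensitive hashing buckets similar vectors together, so a query scans only one bucket instead of the whole dataset. All names and parameters here are invented for the example:

```go
package main

import (
	"fmt"
	"math/rand"
)

const (
	dim     = 8 // toy dimensionality
	nPlanes = 4 // hash bits; more bits means smaller, more selective buckets
)

// lsh hashes a vector to a bucket key by recording which side of
// each random hyperplane it falls on (the sign of the dot product).
func lsh(v []float64, planes [][]float64) int {
	key := 0
	for i, p := range planes {
		var dot float64
		for j := range v {
			dot += v[j] * p[j]
		}
		if dot >= 0 {
			key |= 1 << i
		}
	}
	return key
}

func main() {
	rng := rand.New(rand.NewSource(42))
	planes := make([][]float64, nPlanes)
	for i := range planes {
		planes[i] = make([]float64, dim)
		for j := range planes[i] {
			planes[i][j] = rng.NormFloat64()
		}
	}

	// Index: bucket key -> indices of the vectors hashed there.
	index := map[int][]int{}
	data := make([][]float64, 1000)
	for i := range data {
		data[i] = make([]float64, dim)
		for j := range data[i] {
			data[i][j] = rng.NormFloat64()
		}
		key := lsh(data[i], planes)
		index[key] = append(index[key], i)
	}

	// A query scans only its own bucket (roughly 1/2^nPlanes of the
	// data). The bucket may miss the true nearest neighbor, which is
	// exactly the "approximate" trade-off.
	query := data[123]
	candidates := index[lsh(query, planes)]
	fmt.Printf("scanning %d of %d vectors\n", len(candidates), len(data))
}
```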

u/qualudeheart

Karma: 819 · Cake day: January 31, 2021
About
~Generally Intelligent Reinforcement Learner~

Zizzlin’ like mai GPUzz >_<
