alephnan · 2 years ago
By "time", they mean the writing style of a specific time period.

Feels like a clickbait title. Of course language model weights encode different writing styles. The fact that you can lift out a vector to restyle writing is more interesting, but that's nothing newly discovered here either. It should be obvious that this is possible, given that you can prompt ChatGPT to change its writing style.

n2d4 · 2 years ago
Besides what the sibling comment said, what's most interesting (imo) is that you can manipulate the vectors like that. The fact that you can average the vectors for January and March, and get better results for February, is pretty surprising to me.
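
For the curious, the arithmetic happens directly on the finetuned weights. A minimal sketch of the interpolation idea, assuming PyTorch state_dicts (the toy tensors below stand in for real monthly checkpoints):

    import torch

    torch.manual_seed(0)
    base_sd = {"w": torch.randn(4, 4)}                       # pretrained weights
    jan_sd = {"w": base_sd["w"] + 0.1 * torch.randn(4, 4)}   # finetuned on January text
    mar_sd = {"w": base_sd["w"] + 0.1 * torch.randn(4, 4)}   # finetuned on March text

    # Time vector = finetuned weights minus pretrained weights
    jan_vec = {k: jan_sd[k] - base_sd[k] for k in base_sd}
    mar_vec = {k: mar_sd[k] - base_sd[k] for k in base_sd}

    # Averaging the January and March vectors approximates a "February" model
    feb_sd = {k: base_sd[k] + 0.5 * (jan_vec[k] + mar_vec[k]) for k in base_sd}
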
macleginn · 2 years ago
This also generalises: https://arxiv.org/abs/2302.04863
jimbobthrowawy · 2 years ago
Generalizing vectors in generative models seems like an incredibly useful thing to know about, if you want to use them more effectively. Blew my mind when I saw someone demonstrate doing vector math on a GAN a couple years back to move an "input image" around the space of outputs.

Maybe this could be useful for singling out post-LLM text and generating output that excludes it.
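
The GAN demo was the same kind of arithmetic, just in latent space rather than weight space. A toy sketch with a random linear map standing in for a trained generator:

    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.standard_normal((64, 8))     # toy linear "generator": latent -> image

    z_a = rng.standard_normal(8)         # latent code of one image
    z_b = rng.standard_normal(8)         # latent code of another
    direction = z_b - z_a                # attribute direction in latent space

    # Walking along the direction moves the "input image" smoothly through output space
    frames = [G @ (z_a + t * direction) for t in np.linspace(0, 1, 5)]
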

cush · 2 years ago
Why would it pertain only to writing style?
k__ · 2 years ago
Interesting that writing style works, but other reflective actions don't.

Like, "only use the the 2000 most common words of the English language" or "the response should be 500 words long".

n2d4 · 2 years ago
It does work on other reflective actions; the parent is just wrong. In the paper, they specifically run the experiment on a dataset of political affiliation over time.
mycall · 2 years ago
From the title, I was thinking "of course the neural network of the LLM is a [cause-effect] sequence of words," and thus time is encoded in each connection.
convexstrictly · 2 years ago
fnordpiglet · 2 years ago
solardev · 2 years ago
Thanks!
behnamoh · 2 years ago
The X version worked fine for me. I don't know what you want to achieve by posting a link to a third-party website.
cwmoore · 2 years ago
I think I like time. Though spectral, indeterminate, presently a fixture, essential moments last forever but occur daily. Why would any network encode time if it were all just a crystal vase?
ackbar03 · 2 years ago
what are you on?
phito · 2 years ago
Crystal vase
haltist · 2 years ago
Because people have to publish papers, that's why.

IlliOnato · 2 years ago
Beautiful. Thoughtful. Clever. Wise. In brightness like the face of Odin, in hearing like Moo, in spring and morning most goodly delight. Doing poetic justice to itself. Bringing up crystal vases! Per-bloody-fect.
jiggawatts · 2 years ago
Sooo… if I’m reading this right, it’s possible to force an AI into extrapolating into the future. As in, it’ll answer as if its training were based on data from future years.

Obviously this isn’t time travel, but more of a zeitgeist extrapolation.

I would expect that if an AI was made to answer like it’s from December 2024 it would talk a lot about the US election but it wouldn’t know who won — just that a “race is on.”

This could have actual utility: predicting trends, fads, new market opportunities, etc…

n2d4 · 2 years ago
Kind of. You still need some data from the "future" to extrapolate it: In the paper, they take an LLM finetuned on 2015 political affiliation data, and add to it the difference between 2020 and 2015 Twitter data, and show that the performance is better when the new model is asked about 2020 political affiliation.

So, the LLM still needs to know about 2020 from somewhere. In a way, you teach it about the task, then separately you teach it about 2020, and this method can combine that to make it solve the task for year 2020.
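
A rough sketch of that recipe, as I read the paper (toy tensors stand in for the real checkpoints, and delta just fakes a finetuning update):

    import torch

    torch.manual_seed(0)
    def delta():
        return 0.1 * torch.randn(4, 4)   # stand-in for a finetuning weight update

    base = {"w": torch.randn(4, 4)}          # pretrained LM
    task_2015 = {"w": base["w"] + delta()}   # finetuned on the 2015 affiliation task
    lm_2015 = {"w": base["w"] + delta()}     # finetuned on raw 2015 Twitter text
    lm_2020 = {"w": base["w"] + delta()}     # finetuned on raw 2020 Twitter text

    # Move the task model forward in time by adding the 2015 -> 2020 drift
    task_2020 = {k: task_2015[k] + (lm_2020[k] - lm_2015[k]) for k in base}
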

behnamoh · 2 years ago
nah, this is not what they're talking about.
dartos · 2 years ago
I don’t think it’d be nearly as accurate as purpose-built future predictors.

LLMs aren’t a silver bullet for everything.

gmerc · 2 years ago
Ah, the bitter lesson rears its ugly head.
electrondood · 2 years ago
> LLMs aren’t a silver bullet for everything.

Please explain this to my Product org.

habitue · 2 years ago
Maybe less zeitgeist, but it would be really interesting to see what extrapolated future writing styles look like.
spacecadet · 2 years ago
Here ya go: lorizzle.nl
bkfh · 2 years ago
Can someone ELI5 this?
LectronPusher · 2 years ago
A vector is a position in a space of some dimension. In 2D space a vector is a point (x, y) like (1, 3) or (-2.5, 7.39). We can also do simple math on vectors, like addition: (1, 3) + (2, -1) = (3, 2).

LLMs treat language as combinations of vectors of a very high dimension -- (x, y, z, a, b, c, d, ...). The neat thing is that we can combine these just like the 2D vectors and get meaningful results. If we take the vector for "King", subtract "Man", and add "Woman", we get a vector close to the one for "Queen"!

Once you know this, you can extrapolate and look for ways to categorize groups of vectors and combine them in new ways. As I read it, this research is about finding the vector weights for text from specific time periods -- e.g. January of 2021 -- and comparing them to the vectors for text from a different period -- e.g. March of 2021. It seems that all the operations are still meaningful: you can even average the vectors for January and March and get ones that look like vectors for February!
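
To make the arithmetic concrete, here's the 2D version in code (toy numbers, not real embeddings):

    import numpy as np

    a = np.array([1.0, 3.0])
    b = np.array([2.0, -1.0])
    print(a + b)        # [3. 2.]

    jan = np.array([1.0, 3.0])   # toy "January" vector
    mar = np.array([3.0, 1.0])   # toy "March" vector
    feb = (jan + mar) / 2        # the midpoint plays the role of "February"
    print(feb)                   # [2. 2.]
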

simne · 2 years ago
Well, I think this could become one of the most underestimated ideas in LLM development.

To be honest, it is a relatively obvious idea to make vectors from timestamps and feed them to LLMs, but for some strange reason nobody has done this before, and it looks like it has gone mostly unnoticed in the NN community.

airocker · 2 years ago
I think a more general way to think about it would be to fine-tune on any data and take the weight difference. For example, if we want to create a geography vector, we would fine-tune on geography data and then take the difference from the base weights. Now add this to any other model with the same architecture, and you have a geography-capable LLM.
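
Something like this, reusing the time-vector arithmetic with a different finetuning corpus (toy tensors again; whether the vector transfers cleanly between independently trained models of the same architecture is speculation on my part):

    import torch

    torch.manual_seed(0)
    base = {"w": torch.randn(4, 4)}                       # pretrained weights
    geo_ft = {"w": base["w"] + 0.1 * torch.randn(4, 4)}   # finetuned on geography data

    geo_vec = {k: geo_ft[k] - base[k] for k in base}      # "geography vector"

    other = {"w": base["w"] + 0.1 * torch.randn(4, 4)}    # another same-architecture model
    geo_capable = {k: other[k] + geo_vec[k] for k in base}
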
mjvmroz · 2 years ago
I think the general case is far more interesting than time specifically. There are cool functor/analogy ideas here.