waffletower · 2 years ago
I actively use llama.cpp, and I don't see the lack of a mention as a slight -- it isn't directly affiliated with Meta. While there is tremendous innovation in the project, backwards compatibility is antithetical to the project's culture. I have been updating my models to GGUF, which isn't terrible, but I find I have to invest too much time to stay on top of the rapid, scorched-earth developments. I'm going to move to containerized checkpoints, as I do for my GPU models, for greater maintainability and consistency.
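
For anyone making the same migration, the conversion itself is only a couple of commands from the llama.cpp repo root, if memory serves (the paths and filenames here are illustrative):

    python convert.py /path/to/llama-7b --outfile llama-7b-f16.gguf
    ./quantize llama-7b-f16.gguf llama-7b-Q4_K_M.gguf Q4_K_M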
version_five · 2 years ago
They didn't mention llama.cpp or show it in their picture; that's hopefully an oversight, because otherwise it feels like a major slight. It's a (the?) major reason for Llama's popularity.

I have mixed feelings: Llama is great, but it's perpetuated its shitty license. They could have done so much more good if they'd used GPL-style licensing; instead they basically subverted open source, using an objectively good model as leverage.

kordlessagain · 2 years ago
A lot of times there can be a feeling of being wronged without it being intentional. In this case, I think the mention of AWS as a partner shows intent to put value behind what they are doing for their stakeholders.

The license for Llama 2 is pretty intense, but it mirrors that intent by limiting interactions with individuals at scale, as well as preventing anything learned from the model through inference from being used to train another model. I suspect this is because the dataset on which it was trained is the company's IP, which again is for the shareholders' benefit.

The code is open though, I think out of necessity. AI poses a significant challenge for our survival, and making it open is an indication of transparency. They still need to make money at what they do and charge people for using their IP, within reason.

I guess my question would be: if I used Llama (not the code, but the model itself) to code up a new model, would that be a derivative work?

8note · 2 years ago
Surely it's IP the shareholders have licensed, rather than their own IP.

I.e., my own comments being sublicensed back to me, after I licensed them to Facebook.

refulgentis · 2 years ago
> It's a (the?) major reason for Llama's popularity.

Absolutely not. There's a corner of the overall community that hovers around it and over-perceives it, assuming everyone else uses it too.

It's great if you have an Apple ARM machine and want to see an M2 Pro do 10 tokens/sec (and see what can give an Apple ARM machine a 30-minute battery life).

I also doubt it's a slight; the only callouts are large commercial collaborations, e.g. Nvidia, AMD, and Google, which are representative of each of the three groups we could assign it to.

version_five · 2 years ago
I'd be curious if you have any hard data about use. Mine is anecdotal too, but I see that llama.cpp is a very close second-highest-starred repo with llama in the name, after Meta's Llama. Additionally, all the HF models seem to have GGML/GGUF quantized versions; I'm not aware of a competing format for quantized models. There are also Python bindings which are used in a lot of projects. What is a competing framework, other than PyTorch, that's getting more use? Or is it all just PyTorch (and some HF wrappers) and the rest is a rounding error?
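
For what it's worth, the Python bindings are about as minimal as it gets. A sketch using llama-cpp-python -- the model filename is just an illustration, any local GGUF file works:

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Point at any locally downloaded GGUF file; this path is illustrative.
    llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf")

    out = llm("Q: What is llama.cpp? A:", max_tokens=64)
    print(out["choices"][0]["text"])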
rgbrgb · 2 years ago
fwiw I get more like 35-40 tokens/sec on my M1 MacBook with a 7B model. That's way faster than I can read or skim. If we can figure out how to focus the expertise in small models, I don't see why it wouldn't be viable for those of us who don't want to share all of our convos with big tech.
paxys · 2 years ago
I'm so happy that Meta was slightly late in the LLM race and so decided to go the chaos route by just open sourcing everything.
Zuiii · 2 years ago
Their models are not open source. They made them available under terms that they can change at any time. Even source-available products like Unity have more predictable terms.
skilled · 2 years ago
> There are now over 7,000 projects on GitHub built on or mentioning Llama. New tools, deployment libraries, methods for model evaluation, and even “tiny” versions of Llama are being developed to bring Llama to edge devices and mobile platforms.

Let’s say I want to find the latest or most recent projects built on this; is it possible to find them on GitHub based on those criteria?

version_five · 2 years ago
GitHub has pretty varied filters: you can just search llama and sort by stars or recent activity, etc. You can also exclude a language by prefixing a qualifier with a hyphen, and filtering out Python might get you the "edge" ones (except they usually have Python utilities for converting PyTorch models anyway).
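
For example, in the repository search box (qualifiers are from GitHub's documented search syntax; the date is just an illustration):

    llama sort:stars
    llama pushed:>2023-08-01 sort:updated
    llama -language:python sort:updated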
skilled · 2 years ago
Oh? But you can’t sort by date or things like that?
danShumway · 2 years ago
This is an important drum to continue to beat, but it needs to be paired with the caveat that we are not legally certain that Llama's weights are actually copyrightable. We're also not certain how far IP protections around trade secrets would apply to weights in this situation.

Llama is not Open Source, but until we get a court ruling one way or the other we don't know whether it's actually locked down in the way Facebook intends; and I want to strike a balance between (correctly) pointing out that Facebook is misusing the Open Source label and not ceding to Facebook's claims about how much it can legally constrain people who have never signed a single Llama TOS.

WiSaGaN · 2 years ago
I was actually expecting some comments regarding the 34B Llama 2 model. A quantized 34B model, such as Q5_K_M, might be the sweet spot for a moderate PC in terms of both speed and quality.
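
Rough back-of-the-envelope, assuming Q5_K_M works out to roughly 5.5 bits per weight: 34B parameters × 5.5 / 8 ≈ 23 GB for the weights alone, which just fits in 32 GB of RAM with room left for context.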
jcmontx · 2 years ago
May I ask what you are all doing running an LLM locally?
rgbrgb · 2 years ago
cooking meth, making molotov cocktails, discussing my medical history, sex

ok seriously though I had fun over the weekend chatting with Samantha on a long car ride on my MacBook. We were mostly asking about history.

bytefactory · 2 years ago
Which version are you running, and are you running it through llama.cpp or something? I was just thinking about exactly something like Samantha on the ride home today, and of course it already exists!
dharmab · 2 years ago
Trying to figure out if and how these can be used at companies whose regulatory requirements are too strict for hosted models. Sadly, Meta restricts use of Llama for anything ITAR-related (as opposed to other TOSes, which only restrict weapons and defense).
anothernewdude · 2 years ago
Satisfying HIPAA requirements.
kristianp · 2 years ago
Any MoE models in development at Meta?