zingelshuher commented on Google Is About to Change Everything–and Hopes You Won't Find Out   slate.com/technology/2024... · Posted by u/rntn
simfree · a year ago
Google knowingly made their search results shittier and shittier for years in pursuit of improved KPIs, to the point that alternative search engines are better, and so are LLMs that occasionally hallucinate or outright crash.
zingelshuher · a year ago
> Google knowingly made their search results shittier and shittier for years

Unfortunately this extends to YouTube too. Now they have a new shitty trick: you click on a link and they randomly give you a completely different video.

zingelshuher commented on Google Is About to Change Everything–and Hopes You Won't Find Out   slate.com/technology/2024... · Posted by u/rntn
verdverm · a year ago
Google Cloud Vertex AI lets you run any of the open source models on Google infra; they even have Claude models available. MSFT likely has something similar in their cloud.

Where are people going to run their models? I for one will choose the cloud I already use. It has APIs for the big models and simple deployment of both open source and other proprietary models.

This is completely separate from providing end users a service. How many people self-host or run their own alternatives when there are managed services available? It is unlikely people are going to switch en masse to open source models, especially while there is a price war on SoTA models. It's becoming far cheaper to call a SoTA API than to keep an always-on open source model.

From my experience, running a small model locally was both slower (tokens/sec, plus overall system slowdown) and gave worse results. I switched to cloud-based APIs and will likely not reconsider that decision; multiple orders of magnitude of improvement would need to happen in both performance and quality.

zingelshuher · a year ago
> It is unlikely people are going to switch en masse to open source models

It depends on the task at hand. For complex tasks there's no way a personal computer can compete with giant data centers. But as soon as the software becomes available, users will gladly switch to local AI for personal data search / classification / summarization, etc. This market is potentially huge, and for private, sensitive data there is no other way.
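
A minimal sketch of what that local workflow could look like, e.g. private document search with a small on-device embedding model (the sentence-transformers model name and the notes folder are just assumptions for illustration):

    from pathlib import Path
    from sentence_transformers import SentenceTransformer, util

    # Small embedding model that runs acceptably on CPU; everything stays local.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Index personal notes from a local folder (path is a placeholder).
    docs = [p.read_text() for p in Path("~/notes").expanduser().glob("*.txt")]
    doc_emb = model.encode(docs, convert_to_tensor=True)

    def search(query: str, top_k: int = 3):
        # Embed the query and return the top_k most similar notes with scores.
        q_emb = model.encode(query, convert_to_tensor=True)
        hits = util.semantic_search(q_emb, doc_emb, top_k=top_k)[0]
        return [(docs[h["corpus_id"]], h["score"]) for h in hits]

    print(search("tax documents from last year"))

Nothing here ever leaves the machine, which is the whole point for sensitive data.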

zingelshuher commented on Even Portland now is banning camping, part of the West Coast retreat   seattletimes.com/seattle-... · Posted by u/wallflower
washedup · a year ago
I think this gets to a larger national economic issue: things aren’t great for most people, and when things get really bad, you move somewhere nice.

The many programs developed to tackle homelessness have been built around specific budgets/goals/KPIs, are not able to handle the continuous migration of people with nowhere to go, and are inherently political. I'm sure most politicians mean well.

Sure, migration trends, especially for CA, say otherwise, but there is a large swath of people that governing bodies are bad at tracking.

TLDR: another example of poorly designed government intervention due to misunderstanding of the dynamics behind the process (homelessness)

zingelshuher · a year ago
> poorly designed government intervention due to misunderstanding of the dynamics behind the process (homelessness)

The major driver is easy to understand: just cross the border and you are homeless on full support. Millions did, with the help of the Dems, as future voters. No language, no jobs, no skills. Of course they will vote for free food if they get the option, i.e. for the Dems, which was the whole idea.

zingelshuher commented on Meta Llama 3   llama.meta.com/llama3/... · Posted by u/bratao
HarHarVeryFunny · a year ago
Yeah, but not for image generation unfortunately

I've never had a FaceBook account, and really don't trust them regarding privacy

zingelshuher · a year ago
had to upvote this
zingelshuher commented on Meta Llama 3   llama.meta.com/llama3/... · Posted by u/bratao
cm2012 · a year ago
AI is taking marketshare from search slowly. More and more people will go to the AI to find things and not a search bar. It will be a crisis for Google in 5-10 years.
zingelshuher · a year ago
Only if it does nothing. In fact Google is one of the major players in the LLM field. The winner is hard to predict; chip makers, likely ;) Everybody has jumped on the bandwagon, Amazon is jumping...
zingelshuher commented on Meta Llama 3   llama.meta.com/llama3/... · Posted by u/bratao
exoverito · a year ago
Anecdotally speaking, I use Google search much less frequently and instead opt for GPT-4. This is what a number of my colleagues are doing as well.
zingelshuher · a year ago
I often use ChatGPT-4 for technical info. It's easier than scrolling through pages, when it works. But the accuracy is inconsistent, to put it mildly. Sometimes it gets stuck on a wrong idea.

It's interesting how far LLMs can get. Looks like we are close to the scale-up limit; it's technically difficult to build bigger models. The way to go is probably to add assisting sub-modules. Examples would be web search (we already have it), a database of facts (similar to search), compilers, image analyzers, etc. With this approach the LLM is only responsible for generic decisions and doesn't need to be that big. There's no need to memorize all the data, and even the logic can be partially outsourced to a sub-module.
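
A toy sketch of that sub-module idea, where the LLM only routes the question to a tool and then writes the answer from the retrieved evidence; call_llm() and both tools are fake stand-ins, not real APIs:

    # The model routes and summarizes; specialized tools hold the data.
    def call_llm(prompt: str) -> str:
        # Fake model so the sketch runs end to end: always routes to web search.
        if "Reply as" in prompt:
            question = prompt.split("Question:")[1].splitlines()[0].strip()
            return f"search|{question}"
        return "(answer written from the evidence above)"

    def web_search(query: str) -> str:
        return f"top results for {query!r}"      # wrap a real search API here

    def fact_lookup(entity: str) -> str:
        return f"facts about {entity!r}"         # e.g. a local fact database

    TOOLS = {"search": web_search, "facts": fact_lookup}

    def answer(question: str) -> str:
        # Step 1: the LLM only picks a tool and a query, not the answer itself.
        route = call_llm(
            f"Question: {question}\nReply as '<tool>|<query>' using one of: {list(TOOLS)}"
        )
        tool, query = route.split("|", 1)
        evidence = TOOLS[tool.strip()](query.strip())
        # Step 2: the LLM writes the final answer from the retrieved evidence,
        # so it doesn't need to memorize the data itself.
        return call_llm(f"Using only this evidence:\n{evidence}\nAnswer: {question}")

    print(answer("When was the first transatlantic telegraph cable laid?"))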

zingelshuher commented on Meta Llama 3   llama.meta.com/llama3/... · Posted by u/bratao
ktzar · a year ago
even if they released them, wouldn't it be prohibitively expensive to reproduce the weights?
zingelshuher · a year ago
It's impossible. Meta itself cannot reproduce the model, because training is randomized and that information is lost. First, samples arrive in random order. Second, there are often dropout layers; they generate random masks which exist only on the GPU for the duration of a single sample during training. Nobody saves them, as it would take much more space than the training data. If someone tries to re-train, the patterns will be different, which results in different weights and divergence from the very beginning. The model will converge to something completely different, though with similar behavior if training was stable, and LLMs are stable.

So there is no way to reproduce the model, and this requirement for 'open source' is absurd. It cannot be reliably done even for small models due to GPU-internal randomness; only the smallest models, trained on a CPU in a single thread, could be reproduced, and only academia would be interested.
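
A small PyTorch illustration of the point: with identical initialization and identical data, merely changing the sample order and the dropout masks sends two runs to different final weights (the tiny network and hyperparameters are made up for the demo):

    import torch
    import torch.nn as nn

    def train_once(seed: int) -> torch.Tensor:
        torch.manual_seed(0)   # identical init and identical data in both runs
        net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Dropout(0.5), nn.Linear(16, 1))
        x, y = torch.randn(256, 8), torch.randn(256, 1)
        opt = torch.optim.SGD(net.parameters(), lr=0.05)

        torch.manual_seed(seed)  # only sample order and dropout masks differ
        for _ in range(50):
            perm = torch.randperm(256)           # shuffled mini-batches
            for i in range(0, 256, 32):
                idx = perm[i:i + 32]
                loss = ((net(x[idx]) - y[idx]) ** 2).mean()
                opt.zero_grad(); loss.backward(); opt.step()
        return torch.cat([p.detach().flatten() for p in net.parameters()])

    w1, w2 = train_once(seed=1), train_once(seed=2)
    print("max weight difference:", (w1 - w2).abs().max().item())  # far from zero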

zingelshuher commented on Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length   arxiv.org/abs/2404.08801... · Posted by u/amichail
qwertox · a year ago
I was just chatting with ChatGPT about unlimited context length, and even if you could theoretically achieve a personal assistant this way, one which would know all your chat history, an unlimited context length doesn't seem efficient enough.

It would make more sense to create a new context every day and integrate it into the model at night. Or, every day, a new context of the aggregated last several days: giving it time to sleep on it each day, so it can use it the next day without it needing to be passed in the context again.

zingelshuher · a year ago
If we could keep unlimited memory but use only a selected relevant subset in each chat session, that should help. Of course the key is 'selected'; that's another big problem in itself, like short-term memory. Probably we could build summaries from different perspectives during idle or 'sleep' time. Training knowledge into the model is very expensive and can be done only from time to time, so it's better to add only the most important or most-used fragments. It's likely impossible to do on a mobile robot, a sort of 'thin agent'. If done on a supercomputer, we could aggregate the new knowledge collected by all agents and then push a new model back to them. All this is a sort of engineering approach.
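
A rough sketch of the "unlimited store, selected subset per session" idea; the bag-of-words similarity here is just a stand-in for real embeddings or summaries:

    from collections import Counter
    import math

    memory_store: list[str] = []   # grows without limit, nothing is discarded

    def _similarity(a: str, b: str) -> float:
        # Crude cosine similarity over word counts; swap in an embedding model.
        wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(wa[t] * wb[t] for t in wa)
        norm = math.sqrt(sum(v * v for v in wa.values())) * math.sqrt(sum(v * v for v in wb.values()))
        return dot / norm if norm else 0.0

    def remember(note: str) -> None:
        memory_store.append(note)

    def build_context(message: str, top_k: int = 3) -> str:
        # Only the top_k most relevant memories go into this session's context.
        ranked = sorted(memory_store, key=lambda m: _similarity(message, m), reverse=True)
        return "Relevant memories:\n" + "\n".join(f"- {m}" for m in ranked[:top_k])

    remember("User's dog is called Rex and is afraid of thunderstorms")
    remember("User prefers metric units")
    remember("User's flight to Lisbon is on May 3rd")
    print(build_context("Is there anything I should pack for the Lisbon trip?"))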
