SeanAppleby commented on Show HN: R2R – Open-source framework for production-grade RAG   github.com/SciPhi-AI/R2R... · Posted by u/ocolegro
isoprophlex · 2 years ago
Tangential to the framework itself, I've been thinking about the following in the past few days:

How will the concept of RAG fare in the era of ultra large context windows and sub-quadratic alternatives to attention in transformers?

Another 12 months and we might have million+ token context windows at GPT-3.5 pricing.

For most use cases, does it even make sense to invest in RAG anymore?

SeanAppleby · 2 years ago
I'm quite confident that at least some use cases for injecting context at inference time will stay around for the foreseeable future, regardless of model performance and scaling improvements, because IME those aren't the primary problems the pattern solves for me.

If you are dealing with high-cardinality permissioning models (even just a large number of users who own their own data, though the problem compounds if you have overlapping permissions), then tuning a separate set of layers for every permission set is always going to be wasteful. Trusting a model to have some kind of "understanding" of its permissioning seems plausible assuming some kind of omniscient and perfectly aligned machine, but it's unrealistic in the foreseeable future and definitely not going to cut it for data regs.
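To make that concrete, here's a minimal sketch (the data model and function names are hypothetical, not any real vector-db API) of treating permissions as a hard filter applied before similarity scoring, rather than trusting the model to "understand" who may see what:

```python
# Hypothetical sketch: enforce per-user permissions as a hard ACL filter
# *before* similarity scoring, so unauthorized chunks can never reach the prompt.
from math import sqrt

def cosine(a, b):
    # Plain cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, chunks, allowed_doc_ids, k=3):
    # Filter first: chunks the user cannot see are never even scored.
    visible = [c for c in chunks if c["doc_id"] in allowed_doc_ids]
    visible.sort(key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return visible[:k]

chunks = [
    {"doc_id": "hr-001", "vec": [1.0, 0.0], "text": "salary bands"},
    {"doc_id": "wiki-7", "vec": [0.9, 0.1], "text": "onboarding guide"},
]
print([c["doc_id"] for c in retrieve([1.0, 0.0], chunks, {"wiki-7"})])  # → ['wiki-7']
```

The point is that the permission check is ordinary data-infrastructure logic living outside the model, which is why it works regardless of model quality.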

Also, in the current status quo I don't believe there is a solution on the horizon for continuous, rapid incremental training in prod, so any data sources that change often are also best addressed this way. That will most likely be solved at some point, but it doesn't seem imminent, and regardless there will likely be some cost/performance balancing where injecting post-watermark context at inference time still makes sense, to keep training costs manageable rather than having to iterate training on literally every single interaction.

But yeah, if you're just using it because you have a single collection of context for many users which is too large to fit into the prompt, that seems like it will be subject to the problem you're describing. Although there might still be some benefit to cost/performance optimization both to keeping the prompt short (for cost) and focused (for performance).

SeanAppleby commented on Show HN: R2R – Open-source framework for production-grade RAG   github.com/SciPhi-AI/R2R... · Posted by u/ocolegro
joshring · 2 years ago
Is there a roadmap for planned features in the future? I wouldn't call this a "powerful tool for addressing key challenges in deploying RAG systems" right now. It seems to do the most simple version of RAG that the most basic RAG tutorial teaches someone how to do with a pretty UI over it.

The biggest challenges I've faced around RAG are things like:

- Only works on text-based modalities (how can I use this with all types of source documents, including images?)

- Chunking "well" for the type of document (by paragraph, CSVs including the header on every chunk, tables in PDFs, diagrams, etc.). The rudimentary chunk-by-character-with-overlap approach is demonstrably not very good for retrieval

- The R in RAG is really just "how can you do the best possible search for the given query?". The approach here is so simple that it definitely doesn't produce the best possible search results. It's missing so many known techniques right now, like:

    - Generate example queries that the chunk can answer and embed those to search against.

    - Parent document retrieval

    - So many newer, better RAG techniques have been discussed and used that improve on plain chunk-based retrieval

    - How do you differentiate "needs all source" vs "find in source" questions? Think: Summarize the entire pdf, vs a specific question like how long does it take for light to travel to the moon and back?
- Also other search approaches like fuzzy search/lexical approaches, and ranking them based on criteria like "if the user query is one word, use fuzzy search instead of semantic search". Things like that
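That last routing idea can be sketched in a few lines (toy code with a made-up two-document corpus; a real system would use embeddings for the semantic side, where crude token overlap stands in here):

```python
# Toy sketch of routing by query shape: one-word queries go to fuzzy/lexical
# matching, longer queries to "semantic" search. All names are illustrative.
import difflib

DOCS = {
    "a1": "How long does light take to travel to the moon and back?",
    "a2": "Quarterly revenue summary for the hardware division.",
}

def lexical_search(query, docs):
    # Fuzzy string match against individual words; suits keyword-style queries.
    scored = [
        (max(difflib.SequenceMatcher(None, query.lower(), w.lower()).ratio()
             for w in text.split()), doc_id)
        for doc_id, text in docs.items()
    ]
    return max(scored)[1]

def semantic_search(query, docs):
    # Placeholder for embedding search: token overlap instead of vectors.
    def overlap(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    return max(docs, key=lambda d: overlap(docs[d]))

def route(query, docs):
    # The routing criterion from the comment above: one word → fuzzy search.
    if len(query.split()) == 1:
        return lexical_search(query, docs)
    return semantic_search(query, docs)

print(route("revenue", DOCS))              # one word → lexical, prints a2
print(route("how far is the moon", DOCS))  # sentence → semantic, prints a1
```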

So far this platform seems to just lock you into a really simple embedding pipeline that only supports the most simple chunk based retrieval. I wouldn't use this unless there was some promise of it actually solving some challenges in RAG.

SeanAppleby · 2 years ago
This is my problem with every end-to-end system I've seen in this space. I find that, even building these systems from scratch, all of the hard parts are just normal data infrastructure problems. The "AI" part takes a small fraction of the effort to deliver, even when building the RAG part directly on top of huggingface/transformers.

I've also dealt with what you're describing, but it goes much farther when going to prod, IME. The ingestion side is even messier, in ways these kinds of platforms don't seem to help with. When managing multiple tools in prod with overlapping, non-constant data sources (say, two tools that both need to know the price of a product, which can change at any time), I need both of them built on the same source of truth, and that source of truth fed by our data infra in real time, with relevant documents replaced in more or less an atomic way.
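A hedged sketch of that atomic-replace requirement (the class and method names are illustrative, not any real vector-db API): write the new chunks under a fresh version, then flip a live pointer, so readers never observe a half-updated document.

```python
# Hypothetical versioned chunk store: replace() stages new chunks under a new
# version number and flips the live pointer under a lock, so get() always sees
# a complete document, never a partially replaced one.
import threading

class VersionedStore:
    def __init__(self):
        self._versions = {}   # (doc_id, version) -> list of chunks
        self._live = {}       # doc_id -> currently live version
        self._lock = threading.Lock()

    def replace(self, doc_id, chunks):
        with self._lock:
            version = self._live.get(doc_id, 0) + 1
            self._versions[(doc_id, version)] = chunks
            self._live[doc_id] = version                    # the atomic "flip"
            self._versions.pop((doc_id, version - 1), None)  # drop stale copy

    def get(self, doc_id):
        with self._lock:
            return self._versions.get((doc_id, self._live.get(doc_id)), [])

store = VersionedStore()
store.replace("price-widget", ["price: $10"])
store.replace("price-widget", ["price: $12"])
print(store.get("price-widget"))  # → ['price: $12']
```

In a real deployment the same pattern would sit in front of the embedding index, with the data pipeline driving replace() as upstream prices change.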

Then I have tools with varying levels of permissioning on those overlapping data sources. Say you have two tools in a classroom: one that helps the student based on their work, and another used by the TA or teacher to understand students' answers in a large course. They have overlapping data needs on otherwise private data, and this kind of permissioning layer, which is pretty trivial in a normal webapp, has IME had to be implemented basically from scratch on top of the vector db and retrieval system.

Then experimentation, eval, testing, and releases are the hardest and most underserved areas. It was only relatively recently that anyone even seemed to be talking about eval as a problem to aspire to solve. There's a pretty interesting and novel interplay between the problems of production ML eval (with potentially sparse data) and conventional unit testing. This is the area where we had to put in the most of our own thought before I felt reasonably confident putting anything into prod.

FWIW we just built our own internal platform on top of langchain a while back, seemed like a good balance of the right level of abstraction for our use cases, solid productivity gains from shared effort.

I think this is a really interesting problem space, but yeah, I'm skeptical of all of these platforms as they seem to always be promising a lot more than they're delivering. It looks superficially like there has been all of this progress on tooling, but I built a production service based on vector search in 2018 and it really isn't that much easier today. It works better because the models are so much better, but the tools and frameworks don't help that much with the hard parts, to my surprise honestly.

Perhaps I'm just not the user and am being excessively critical, but I keep having to deal with execs and product people throwing these frameworks at us internally without understanding the alignment between what is hard about building these kinds of services in prod and what these kinds of tools make easier vs harder.

SeanAppleby commented on Why to Start a Startup in a Bad Economy (2008)   paulgraham.com/badeconomy... · Posted by u/karimf
projectazorian · 3 years ago
Most VC’s don’t fund peasants. Founders tend to be the children of lesser nobles, with a few social climbers thrown in to add spice.
SeanAppleby · 3 years ago
I grew up working class and found that VCs were, in my experience, more accessible than any other upper-class institution I've seen.

When I was younger I had an easier time getting meetings with partners at decent VC funds (including YC) than interviews with Google, for example. And accordingly an easier time getting seed funding than a prestigious internship.

VCs would generally look at prototypes and listen to the story, if you made the initial case concisely and it made sense, whereas other institutions would just throw my resume away with no calls because it didn't match whatever filters. I just wouldn't even get to talk to hiring managers at decent companies.

I didn't raise a really meaningful amount of money, but it seems implausible that VCs/angel investors who were willing to give me five-six figures for pre-seed wouldn't have given me six-seven in the next round if traction was there.

They said they would, and if they wouldn't have, the initial check would have been irrational: they knew the company's runway meant it would need to raise again, so giving me anything at all only made sense if follow-on funding was on the table. If I was going to be discriminated against for being from a working-class background, not going to a good enough school, or being a technical cofounder who didn't study CS, it would probably have been at the very beginning.

SeanAppleby commented on Statement from Mark Zuckerberg   facebook.com/login/web/... · Posted by u/jaredwiener
adolph · 4 years ago
What “does his majority control over Facebook” have to do with the accusations about Facebook’s behavior? Would the behavior be ok if it were majority controlled by mutual funds?
SeanAppleby · 4 years ago
The implicit assertion is that this behavior wouldn't happen if not for Zuckerberg personally causing the behavior.

As someone who has worked at large tech companies though, I find that to be an extremely questionable assertion.

FB has incentives. Zuckerberg didn't invent the dynamics surrounding their business, and I don't see how having a faceless bureaucracy in charge would lead to an organization that is more willing to reject its own incentives.

If anything it would seem like having more obscure and diffuse leadership would lead to less accountability, not more.

SeanAppleby commented on Statement from Mark Zuckerberg   facebook.com/login/web/... · Posted by u/jaredwiener
dannykwells · 4 years ago
This is called the Banality of Evil. Look up Hannah Arendt. It is a well established idea.
SeanAppleby · 4 years ago
Eichmann in Jerusalem is the book that coined the phrase for anyone passing through, and it's a pretty wild story.

It's essentially Arendt, a Jewish exile from Berlin who fled the Holocaust, wrestling with her realization that Eichmann, who organized major portions of the Holocaust, wasn't a psychopath, but a completely mundane, thoughtless, career-focused bureaucrat who was trying to rise in government and believed in doing what you are told, and who then organized one of the most evil acts in human history without reflecting on what he was doing.

SeanAppleby commented on A flawed paradigm has derailed the science of obesity   statnews.com/2021/09/13/h... · Posted by u/statnews
zamadatix · 4 years ago
It is an explanation. The article doesn't propose an alternative to the imbalance explanation, just an alternative focus of action for achieving said intake balance. It doesn't even say the existing method of action (flat intake reduction as the answer to the imbalance explanation; the action is what the article actually finds flawed, not the explanation) won't work, just that it is ineffective for the amount of effort involved.
SeanAppleby · 4 years ago
I believe the point is that the answer raises another question: "why?"

In their drinking example, alcoholism literally is overdrinking, but we accept that there is an underlying mechanism of physical and psychological dependency that exists outside of "they just consciously choose to drink all of the time and other people don't". At this point we generally recognize it as a disease with more nuance than "overdrinking". We recognize that there is a significant neurological component that is, by the point someone is an alcoholic, not under their total conscious control. Their subconscious is pushing them to do things that the subconsciouses of people without the disease do not push for.

Similarly, you could ask the question, why do some people eat significantly more than they burn? And it then seems not implausible that the problem could be similar to that of alcoholism, that some neurological system is calling for the body to ingest more food, whereas other people's appetites are fundamentally more accurately calibrated at a subconscious level.

This is much more speculative, but it would even seem to make sense from a wider lens. Almost our entire evolutionary lineage existed in a world of food scarcity, not overabundance. The selection pressure needed to evolve a reliable safeguard against overeating plausibly hasn't existed long enough for that mechanism to be as reliable and widespread as the one that prevents us from allowing ourselves to starve to death.

The health problems of the 20th and 21st century are still incredibly new from the perspective of the mechanisms that created our instinctual impulses.

SeanAppleby commented on How Google bought Android   arstechnica.com/informati... · Posted by u/samizdis
rejectedandsad · 4 years ago
You can walk into a city. You have to be judged to be in the top 1% of intelligence to work at Google.

People like me are untermensch to folks like you.

SeanAppleby · 4 years ago
I promise you that you have a vastly higher opinion of people who work at google than people who work at google do.
SeanAppleby commented on How People Get Rich Now   paulgraham.com/richnow.ht... · Posted by u/prakhargurunani
Balgair · 4 years ago
Are the fake-meat companies tech companies? I thought they are more like contract manufacturers, brewers, or other industrial foodstuffs.

No doubts on the access to cheap debt, though.

SeanAppleby · 4 years ago
I think it depends purely on your definition of "tech".

Impossible's engineering of soybeans to produce more heme, to make fake meat behave more like meat, is a technology, in that it is a novel innovation applied to solve a real-world problem.

But in modern common parlance, "tech" tends to mean that a company's offering is entirely software or heavily augmented by new software, or that the company has ties to a specific network of talent/investors/etc., or that the company has very low incremental costs per user. By those definitions, they might not be a tech company.

SeanAppleby commented on Harvard University Won’t Require SAT, ACT for Admissions Next Year   wsj.com/articles/harvard-... · Posted by u/zachthewf
Spivak · 5 years ago
I think there’s a silver lining to all of this behavior in that it demonstrates that nothing really matters when it comes to determining if you’ll be successful in college.

Test scores don’t prove much except who had the resources, time, and motivation to study the most. You could replace the SAT and ACT with obscure movie trivia and you’d see roughly the same distribution of scores, and it would have the same predictive power. The people who signal that they’re willing to put a lot of time, money, and effort into their education do well… surprised pikachu.

There’s no such thing as qualified for college outside of a genuine desire and motivation to do it.

So I don’t see any sort of contradiction in colleges admitting people based on novelty, coolness, and straight-up effort.

SeanAppleby · 5 years ago
My understanding is that SAT scores are significantly correlated with wealth (resources, time, externally imposed motivation), but not by as much as other admissions criteria, and that equity in educational attainment dramatically improved after standardized tests were introduced.
SeanAppleby commented on Young U.S. men having a lot less sex in the 21st century, study shows   reuters.com/article/us-us... · Posted by u/pseudolus
watwut · 5 years ago
Having home should not be requirement for having children. You quite literally don't need it.
SeanAppleby · 5 years ago
People want to provide a stable home for their children.

Having to move can mean your kid has to find all new friends, and if buying a home is a complete impossibility for you, you might be further down Maslow's hierarchy than starting a family, and your life might be more financially brittle than you'd want it to be when you're taking care of kids.
