Readit News
itsaquicknote commented on LangChain Announces 10M Seed Round   blog.langchain.dev/announ... · Posted by u/lysecret
dbish · 3 years ago
Interestingly this is shortly after another huge “seed” round in this space from fixie.ai (17mm). A lot of money being thrown at making it easier to chain/give LLMs access to other applications and tools.
itsaquicknote · 3 years ago
Yeah, wow, a 17M seed round and not a hint of irony. Ferocious. Capital is DESPERATE to throw money at anything that appears to be related to LLMs. "Value creation event of our lifetimes" etc. There's grift money to be made here, and I'll be damned if some decent proportion of the "API wrapper + template UI" startups masquerading as AI companies aren't cashing in. Not sure I blame them.
itsaquicknote commented on LangChain Announces 10M Seed Round   blog.langchain.dev/announ... · Posted by u/lysecret
itsaquicknote · 3 years ago
The landgrab going on in this space is ferocious. If you weren't convinced this is mania, and a 10M "seed" round for Langchain doesn't do it for you, nothing will. Well done Harrison on the cash grab (take as much money off the table every round as you can), it's a smart move. But I can't shake the feeling that this ever-increasing mania will sweep up anything OSS with vague traction, and this whole AI space that used to default religiously to open and sharing will fairly quickly end up in VC-funded fiefdoms, with pay-to-play being all that's left modulo the "forever free" community-hobbled versions. Hope I'm wrong.
itsaquicknote commented on Entrepreneurs who regret starting businesses   bbc.com/worklife/article/... · Posted by u/pmoriarty
marpstar · 3 years ago
A lost marriage isn't always for the worst.
itsaquicknote · 3 years ago
And sometimes it is.
itsaquicknote commented on Show HN: Unriddle – Create your own GPT-4 on top of any document   unriddle.ai/... · Posted by u/naveedjanmo
itsaquicknote · 3 years ago
Hey Naveed, this is a great project, well done. I'm curious about the summarisation of longform content. You mention that you're using Langchain - are you using the MapReduce approach for documents that exceed the 32k context window, or some other approach? 600 pages at ~500 words/tokens a page would mean about $20 to mapreduce through a big doc, which seems crazy, especially if you iterate on those 'summary' prompts. Or are you using embeddings for everything, including summary prompts?
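For context, the map-reduce approach mentioned above boils down to: split the document into chunks that fit the context window, summarize each chunk independently (map), then summarize the concatenated partial summaries (reduce). Here is a minimal sketch of that pattern in plain Python with a hypothetical stand-in `summarize` function in place of a real LLM call (LangChain's `load_summarize_chain(llm, chain_type="map_reduce")` wraps the same idea), plus the back-of-envelope cost math from the comment, assuming roughly $0.06 per 1k input tokens (GPT-4 32k pricing at the time):

```python
def chunk(text: str, max_tokens: int = 3000) -> list[str]:
    # Crude tokenization: treat whitespace-separated words as ~1 token each.
    words = text.split()
    return [" ".join(words[i:i + max_tokens]) for i in range(0, len(words), max_tokens)]

def summarize(text: str) -> str:
    # Stand-in for an LLM call; here we just keep the first sentence.
    return text.split(". ")[0]

def map_reduce_summary(text: str) -> str:
    # Map: summarize each chunk independently.
    partials = [summarize(c) for c in chunk(text)]
    # Reduce: summarize the concatenated partial summaries.
    return summarize(" ".join(partials))

# Back-of-envelope cost from the comment: 600 pages x ~500 tokens/page,
# at an assumed ~$0.06 per 1k input tokens (GPT-4 32k pricing, early 2023).
pages, tokens_per_page, price_per_1k = 600, 500, 0.06
print(f"~${pages * tokens_per_page / 1000 * price_per_1k:.0f} per full pass")  # ~$18
```

Note the cost scales with every map pass, which is why iterating on the map-stage prompt gets expensive fast, whereas an embeddings-based retrieval approach only pays the embedding cost once per document.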
itsaquicknote commented on Do large language models need sensory grounding for meaning and understanding?   drive.google.com/file/d/1... · Posted by u/georgehill
_xnmw · 3 years ago
Quality of output does not mean that the process is genuine. A well-produced movie with good actors may depict a war better than footage of an actual war, but that is not evidence of an actual war happening. Statistical LLMs are trying really hard at "acting" to produce output that looks like there is genuine understanding, but there is no understanding going on, regardless of how good the output looks.
itsaquicknote · 3 years ago
This is Searle's Chinese Room argument though, right? The claim that there's no abstraction or internal modelling going on. I wish I could find the post I read recently that demonstrated some fairly clear evidence that there IS some level of internal abstraction/reasoning going on in LLMs.

Do we allow for a matter of degree, rather than a binary of "zero" vs. "complete" understanding?

itsaquicknote commented on OpenAI’s policies hinder reproducible research on language models   aisnakeoil.substack.com/p... · Posted by u/randomwalker
dahart · 3 years ago
No matter how you define it, or whether people even agree companies should be obligated to provide certain public services, we are just nowhere near that line yet in this case, not even remotely close. It's hand-wavy to say it's important, but this is all brand new, there are only a handful of researchers involved, the critical mass to justify what you're suggesting does not yet exist, it won't for some time, and there's no guarantee it ever will. I'm not sure what you mean by publicly funded trust, but that's typically quite different from privately funded public services. Assuming that cost is even the reason here, then if someone wants to establish a trust and engage OpenAI, they can.

That said, what if OpenAI shut down codex because it has dangerous possibilities and amoral “researchers” started figuring out how to exploit them? What if it was fundamentally buggy or encouraging misleading research? What if codex was accidentally leaking or distributing export-controlled or other illegal (copyright, etc.) information? I’m explicitly speculating on possibilities, while you’re making unstated assumptions, so entertain the question of whether OpenAI is already doing a public service by shutting it down.

itsaquicknote · 3 years ago
Agree to disagree.
itsaquicknote commented on OpenAI’s policies hinder reproducible research on language models   aisnakeoil.substack.com/p... · Posted by u/randomwalker
itsaquicknote · 3 years ago
Replying to both responses because they're all good points. My argument boils down to the fact that some private companies end up becoming social utilities, and once that happens, the rules (should) change as part of the social contract, which means, yeah, they can't simply "pull the rug". The research is important precisely because it's into systemically significant systems.

I get that it's difficult to define the line where that gets crossed. But the idea to provide a publicly funded trust that manages legacy versions of things like this is not a bad idea.

itsaquicknote commented on OpenAI’s policies hinder reproducible research on language models   aisnakeoil.substack.com/p... · Posted by u/randomwalker
z3c0 · 3 years ago
I thought I must be going crazy until I saw your comment. This sounds like a bad research practice that probably shouldn't be reproduced to begin with.
itsaquicknote · 3 years ago
Research into systemically important infrastructure cannot be damned because that infrastructure isn't public. It's a cheap moralizing argument to say "pfff, this was predictable". Maybe so, but there isn't an alternative. Much like research on Twitter. Once these companies start to drift into providing what become broadscale social utilities and public services it doesn't matter that they're private. There are(/should be) obligations that come with that.

You can't handwave and say go do your research on some micro-niche open source project that's way behind the SOTA and has nowhere near the same reach. That's not what "best practice" means here.

itsaquicknote commented on Transformers.js   xenova.github.io/transfor... · Posted by u/skilled
buryat · 3 years ago
your comments are very snarky

It would be great if we all try to keep the tone respectful and avoid snarkiness to maintain a constructive discussion

https://news.ycombinator.com/newsguidelines.html

itsaquicknote · 3 years ago
No they're not mate, it's just you. I've read the guidelines (thanks for helpfully linking them). I see this on HN, people infer offense and cite the book rather than engage.

By not highlighting what you found "snarky", your response is a definitional "shallow dismissal". I see you just "picked the most provocative thing to complain about". Not a lot of being "kind" either.

So you know what would also be great? If you held yourself to the standards you're keen to police around here.

itsaquicknote commented on GitHub Copilot X – Sign up for technical preview   github.blog/2023-03-22-gi... · Posted by u/todsacerdoti
itsaquicknote · 3 years ago
Ouch, this nukes a few startups I was watching that were working on "basically this". What's the plan, control.dev and cursor.so?

u/itsaquicknote

Karma: 90 · Cake day: March 16, 2023