Readit News
edwin commented on How Generative Engine Optimization (GEO) rewrites the rules of search   a16z.com/geo-over-seo/... · Posted by u/eutropheon
kurtoid · 3 months ago
SEO and its related fields are a net negative for the Internet (and maybe humanity in general)
edwin · 3 months ago
Unlike classic search, which got worse over time due to SEO gaming, AI search might actually improve with scale. If LLMs are trained on real internet discussions (Reddit, forums, reviews), and your product consistently gets called out as bad, the model will eventually reflect that. The pressure shifts from optimizing content to improving the product itself.
edwin commented on How Generative Engine Optimization (GEO) rewrites the rules of search   a16z.com/geo-over-seo/... · Posted by u/eutropheon
maltelandwehr · 3 months ago
The 37M/day is an estimate from Rand Fishkin that often gets quoted as gospel. It is based on limited external data. OpenAI itself mentioned 1B/day, and overall usage has grown significantly since then.

Also, 1 search on ChatGPT easily replaces 5-10 searches on Google.

Many B2B SaaS companies already get the same number of leads from ChatGPT that they get from Google, because clicks from ChatGPT are better informed and have a significantly higher conversion rate. I am talking up to +700% CVR vs. traffic from Google for some companies.

edwin · 3 months ago
We looked at the same data that Rand Fishkin used and definitely came to a different conclusion.
edwin commented on How Generative Engine Optimization (GEO) rewrites the rules of search   a16z.com/geo-over-seo/... · Posted by u/eutropheon
edwin · 3 months ago
A few takeaways from a study we ran (~800 consumer queries, repeated over a few days):

* AI answers shift a lot. In classic search a page-1 spot can linger for weeks; in our runs, the AI result set often changed overnight.

* Google’s new “AI Mode” and ChatGPT gave the same top recommendation only ~47% of the time on identical queries.

* ChatGPT isn’t even consistent with itself. Results differ sharply depending on whether it falls back to live retrieval or sticks to its training data.

* When it does retrieve, ChatGPT leans heavily on publications it has relationships with (NYPost and People.com for product recs) instead of sites like rtings.com.

Writeup: https://amplifying.ai/blog/why-ai-product-recommendations-ke...

Data: https://amplifying.ai/research/consumer-products
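The ~47% agreement figure above is just a fraction of identical queries where both engines name the same top pick. A minimal sketch of that metric, with made-up toy data (the study's real data is at the links above):

```python
# Hypothetical sketch of the agreement metric: the fraction of shared
# queries where two engines return the same top recommendation.
# Product names below are invented for illustration.

def top_pick_agreement(results_a, results_b):
    """results_a/results_b map query -> top recommended product."""
    shared = set(results_a) & set(results_b)
    if not shared:
        return 0.0
    matches = sum(1 for q in shared if results_a[q] == results_b[q])
    return matches / len(shared)

ai_mode = {"best tv": "LG C3", "best blender": "Vitamix 5200"}
chatgpt = {"best tv": "LG C3", "best blender": "Ninja Pro"}
print(top_pick_agreement(ai_mode, chatgpt))  # 0.5
```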

edwin commented on Native JSON Output from GPT-4   yonom.substack.com/p/nati... · Posted by u/yonom
stevenhuang · 2 years ago
apparently pandoc also supports pptx

so you can tell GPT4 to output markdown, then use pandoc to convert that markdown to pptx or pdf.
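A minimal sketch of that pipeline's second half, assuming pandoc is installed and on PATH (pandoc infers the output format from the file extension, so the same call works for .pptx or .pdf):

```python
import shutil
import subprocess

def pandoc_cmd(md_path, out_path):
    # pandoc infers the output format (pptx, pdf, ...) from out_path's
    # extension, so no explicit --to flag is needed here.
    return ["pandoc", md_path, "-o", out_path]

def convert(md_path, out_path):
    if shutil.which("pandoc") is None:
        raise RuntimeError("pandoc not found on PATH")
    subprocess.run(pandoc_cmd(md_path, out_path), check=True)

# e.g. convert("slides.md", "slides.pptx") after saving GPT-4's
# markdown output to slides.md
```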

edwin commented on Native JSON Output from GPT-4   yonom.substack.com/p/nati... · Posted by u/yonom
zyang · 2 years ago
Is it possible to fine-tune with custom data to output JSON?
edwin · 2 years ago
That's not the current OpenAI recipe. Their expectation is that your custom data will be retrieved via a function/plugin and then processed by a chat model.

Only the older completion models (davinci, curie, babbage, ada) are available for fine-tuning.
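An illustrative sketch of the retrieve-then-process pattern described above. Both `retrieve` and `chat_model` are stand-ins I made up; a real setup would call your data store (via a function/plugin) and the chat completion API respectively:

```python
# Stand-in sketch: custom data is fetched by a retrieval function, and
# only the retrieved context is handed to the chat model.

def retrieve(query, documents):
    # Naive keyword retrieval over your custom data; a stand-in for a
    # function/plugin call against a real data store.
    return [d for d in documents if query.lower() in d.lower()]

def chat_model(query, context):
    # Stand-in for a chat completion call: the retrieved context would
    # be injected into the prompt alongside the user's query.
    return f"Answer to {query!r} using {len(context)} retrieved document(s)."

docs = ["Invoice policy: net 30", "Refund policy: 14 days"]
print(chat_model("refund", retrieve("refund", docs)))
```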

edwin commented on Native JSON Output from GPT-4   yonom.substack.com/p/nati... · Posted by u/yonom
daralthus · 2 years ago
What's semantic caching?
edwin · 2 years ago
With LLMs, the inputs are highly variable so exact match caching is generally less useful. Semantic caching groups similar inputs and returns relevant results accordingly. So {"dish":"spaghetti bolognese"} and {"dish":"spaghetti with meat sauce"} could return the same cached result.
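A toy sketch of the idea. A real semantic cache would use an embedding model to score similarity; here a bag-of-words cosine stands in so the sketch runs anywhere, which means it only matches on shared words (a real embedding model would also match paraphrases like "spaghetti with meat sauce"). The threshold value is arbitrary:

```python
# Toy semantic cache: store (embedding, value) pairs and return the
# cached value for any sufficiently similar key.
from collections import Counter
import math

def embed(text):
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.5):
        self.entries = []  # list of (embedding, cached value)
        self.threshold = threshold

    def put(self, key, value):
        self.entries.append((embed(key), value))

    def get(self, key):
        q = embed(key)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None

cache = SemanticCache()
cache.put("spaghetti bolognese", {"recipe": "..."})
print(cache.get("spaghetti with bolognese sauce") is not None)  # True
```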
edwin commented on Native JSON Output from GPT-4   yonom.substack.com/p/nati... · Posted by u/yonom
adultSwim · 2 years ago
Running an LLM every time someone clicks on a button is expensive and slow in production, but probably still ~10x cheaper to produce than code.
edwin · 2 years ago
New techniques like semantic caching will help. This is the modern era's version of building a performant social graph.
edwin commented on Native JSON Output from GPT-4   yonom.substack.com/p/nati... · Posted by u/yonom
yonom · 2 years ago
This is cool! Are you using one-shot learning under the hood with a user provided example?
edwin · 2 years ago
BTW: Here's a more performant version (fewer tokens) https://preview.promptjoy.com/apis/jNqCA2 that uses a smaller example but will still generate pretty good results.
edwin commented on Native JSON Output from GPT-4   yonom.substack.com/p/nati... · Posted by u/yonom
yonom · 2 years ago
This is cool! Are you using one-shot learning under the hood with a user provided example?
edwin · 2 years ago
Thanks. We find few-shot learning to be more effective overall. So we are generating additional examples from the provided example.
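A sketch of what assembling a few-shot prompt from examples might look like. The extra example here is supplied by hand; per the comment above, in the product the additional examples would be model-generated from the single seed example (the dishes and format below are invented for illustration):

```python
# Build a few-shot prompt: several input/output example pairs, then the
# new input, leaving the final "Output:" for the model to complete.
import json

def build_prompt(examples, new_input):
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {json.dumps(inp)}")
        lines.append(f"Output: {json.dumps(out)}")
    lines.append(f"Input: {json.dumps(new_input)}")
    lines.append("Output:")
    return "\n".join(lines)

seed = ({"dish": "pancakes"}, {"ingredients": ["flour", "eggs", "milk"]})
extra = ({"dish": "omelette"}, {"ingredients": ["eggs", "butter"]})
prompt = build_prompt([seed, extra], {"dish": "toast"})
print(prompt)
```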
edwin commented on Native JSON Output from GPT-4   yonom.substack.com/p/nati... · Posted by u/yonom
edwin · 2 years ago
For those who want to test out the LLM as API idea, we are building a turnkey prompt to API product. Here's Simon's recipe maker deployed in a minute: https://preview.promptjoy.com/apis/1AgCy9 . Public preview to make and test your own API: https://preview.promptjoy.com

u/edwin

Karma: 41 · Cake day: February 5, 2008
About
urban chicken farmer; data + model wrangler

https://twitter.com/edwin
