I think we've all become hyper-optimistic about technology. We want this tech to work and we want it to change the world in some fundamental way, but either things are moving very slowly or not at all.
Usually the subtext is that they've created an onerous promotion process, they are underpaying you relative to market rate, and/or management doesn't want to make hard decisions or uphold their side of the bargain. The last thing I want to do as an IC, after I've crawled through glass for several years to accomplish something Herculean, is draft documents that defend my accomplishments from bureaucrats.
When I hear comments like that, I raise my eyebrows because it’s almost tempting me to apply elsewhere. I don’t know why, when presented with mountains of evidence, managers can’t just make a reasoned judgment call and grant promotions when appropriate, unassisted.
Any recommendations for acquiring the place website URL through an API or ethically scraping it at scale? I’m specifically wondering about options that wouldn’t involve Google Places.
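One Google-free option is OpenStreetMap's Overpass API, where many places carry a `website` tag (coverage varies a lot by region). A minimal sketch, assuming the public Overpass endpoint and rate-limit-friendly usage; the function names and the canned response are mine:

```python
import json

def build_overpass_query(amenity: str, bbox: tuple) -> str:
    """Build an Overpass QL query for places of a given type that
    carry a `website` tag, within a (south, west, north, east) bbox."""
    s, w, n, e = bbox
    return (
        "[out:json][timeout:25];"
        f'node["amenity"="{amenity}"]["website"]({s},{w},{n},{e});'
        "out tags;"
    )

def extract_websites(response_text: str) -> list:
    """Pull (name, website) pairs out of an Overpass JSON response."""
    data = json.loads(response_text)
    return [
        (el["tags"].get("name", "?"), el["tags"]["website"])
        for el in data.get("elements", [])
        if "website" in el.get("tags", {})
    ]

# A real call would POST the query to
# https://overpass-api.de/api/interpreter; here we parse a canned
# response so the sketch is self-contained.
sample = json.dumps({"elements": [
    {"tags": {"name": "Cafe Example", "website": "https://cafe.example"}}
]})
print(extract_websites(sample))
```

At scale you'd want to respect the Overpass usage policy (throttle requests, set a descriptive User-Agent) or run your own Overpass instance against a planet extract.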
First, didn't realize this was Rand Fishkin writing it. He knows his stuff. He also links another article in there that is more prescriptive, arguing that incrementality measurement is all that matters and laying out how to think about it at a high level, and I completely agree it is all that matters.
https://sparktoro.com/blog/how-to-measure-hard-to-measure-ma...
That in turn links to another respected analytics leader, Avinash Kaushik:
https://www.kaushik.net/avinash/marketing-analytics-attribut...
That is also a must read.
The bottom line is what used to work no longer does, and marketers (and finance and leadership) need to get used to having less fidelity and availability than they were used to. It also means marketing teams that thrash, trying a new low-impact tactic every week instead of constructing experiments likely to deliver statistically significant reads on incremental lift, are going to be spinning their wheels and wasting dollars. And that's something to watch out for.
Unfortunately, setting up proper experiments, controlling for bias, getting clean data, etc. are material challenges that require skills, scale (to get a read on bottom of funnel especially), and resources/budget.
There aren't great options out there for that now if you're a smaller or mid size company that I'm aware of, though if anyone is aware of them I'd love to have them on my radar.
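For concreteness, the basic read such an experiment produces is a holdout comparison: withhold the campaign from a random slice of the audience, then test whether the conversion-rate difference clears noise. A minimal sketch of a two-proportion z-test for incremental lift, stdlib only; the numbers are invented:

```python
from math import sqrt, erf

def lift_z_test(conv_t, n_t, conv_c, n_c):
    """Two-proportion z-test: treated group (saw the campaign)
    vs. holdout (did not). Returns (relative lift, z, one-sided p)."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    p_pool = (conv_t + conv_c) / (n_t + n_c)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))        # normal-CDF tail
    lift = (p_t - p_c) / p_c
    return lift, z, p_value

# Invented example: 6% conversion treated vs. 5% in holdout.
lift, z, p = lift_z_test(1200, 20000, 1000, 20000)
```

The "scale" point above falls out of the `se` term: with small `n`, even a real 20% relative lift at the bottom of the funnel won't produce a significant `z`, which is exactly why smaller companies struggle to get a read.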
CLIP embeddings can absolutely “read” text if the text is large enough. Tiling enables the model to read small text.
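The tiling step itself is just geometry: cut the image into encoder-sized crops, embed each crop separately, and overlap the crops so text straddling a boundary appears whole in at least one tile. A rough sketch (the 224px tile matches common CLIP input resolution; the overlap value is illustrative):

```python
def tile_boxes(width, height, tile=224, overlap=32):
    """(left, top, right, bottom) boxes covering an image with square
    tiles of side `tile`, overlapping by `overlap` px so small text on
    a tile boundary is fully visible in at least one tile."""
    stride = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), stride):
        for left in range(0, max(width - overlap, 1), stride):
            # Clamp edge tiles to the image bounds.
            boxes.append((left, top,
                          min(left + tile, width),
                          min(top + tile, height)))
    return boxes

# A 448x224 image yields three overlapping 224px-tall tiles.
boxes = tile_boxes(448, 224)
```

Each box would then be cropped and fed through the image encoder, so small text occupies many more of the encoder's effective pixels than it would in a single downscaled pass.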
I've seen entire teams burn so much money by overcomplicating projects. Bikeshedding about how to implement DDD, hexagonal architecture, design patterns, complex queues that would maybe one day be required if the company scaled 1000x, unnecessary eventual consistency that required so much machinery and so many man-hours to keep data integrity under control. Some of these projects were so far past their deadlines that they had to be cancelled.
And then I've seen one-man projects copy-pasting spaghetti code around like there's no tomorrow that had a working system within 1/10th of the budget.
Now I admire those who can just produce value without worrying too much about what's under the hood. Very important mindset for most startups. And a very humbling realization.
But at least in science roles it hasn’t happened yet. Rather, I keep seeing instances of bogus scientific conclusions which waste money for years before they are corrected.
Being systematic, reproducible, and thorough is difficult, but it’s the only way to do science.
What do they mean by this? Isn't this roughly what GPT-4 Vision and LLaVA do?