ripbasarur · a year ago
Their track record with the Humane AI Pin does not exactly inspire confidence in their ability to build an AI search you can trust. Also, what a pivot from hardware to fact-checking.
debacle · a year ago
Fact checking is not objective; you just need to convince someone that your facts jibe with their perception.
sqeaky · a year ago
There are better and worse ways to do fact checking. Appealing to evidence is more objective than appealing to vibes or appealing to baseless authority.

Once solid evidence is in sight, there are usually some clean objective takes to be had, and some fairly obvious or likely subjective takes that a reasonable viewer might have.

The real problems happen when one side objects to what counts as evidence at all or one side puts forward low quality evidence and claims it is beyond reproach.

mike_hearn · a year ago
It can be and they want to focus only on objective questions:

> To use an example nearer to my heart, say you want to compare how many Apple and Samsung devices were sold in the past five years. The service would locate and collate that information.

My question would be to what extent is this really a problem. Sounds like they're going to be competing with Google and Bloomberg which is a tough sell. Finding data quickly is their bread and butter.


tjpnz · a year ago
Two reasons commonly cited:

- The perception that founders with big tech "credentials" stand a better chance of success than those without. The founders of Humane were ex-Apple.

- The idea that founders will generally be more successful on their second try, even if their first ended up being a complete trash fire - see Adam Neumann's new venture. This one sounds more reasonable on the surface but I've got no data to back it.

vinni2 · a year ago
Ex-Apple doesn’t really give much credibility either. For a hardware startup, yes, but for an AI fact-checking startup? Unsure.
beezlebroxxxxxx · a year ago
Execs like these fail upwards by chasing funding that they can incinerate in bonfires of "innovation" and "disruption."

For whatever reason, the failure of their tenure at past companies (or the outright failure of those companies) is usually not held against them.

But wait, these execs need insane compensation because, conveniently, "they have the special skills needed to be an exec" [see my first sentence].

SV and the tech world have this idea that many execs and founders are something like Steve Jobs. They love to promote that image. In my experience, they're more like Elizabeth Holmes or your random MBA/MBB alum psychopath failing upwards.

dmitrygr · a year ago
I am not sure that ex-Apple is really worth anything here. Apple is a kingdom. Unless you are Tim or one of his close team, your opinion on product does not matter, you will do what you are told. So, unless you are one of those few people, "ex-Apple" means that you are very good at executing on things you were clearly told to do. I would gladly hire any ex-Apple engineer and be sure to get a great hire! But hiring ex-Apple product people...
spywaregorilla · a year ago
> This one sounds more reasonable

yeah, for example, purchasing a magazine about surfing

hangonhn · a year ago
I don't know why the title is written the way it is. It's slightly different from the title on the article -- "Humane execs leave company to found AI fact-checking startup".

The current link title "AI Humane execs leave company to found AI fact-checking startup" reads like an AI running the company Humane left to start its own AI powered fact-checking startup. It's very Kafkaesque and funny but maybe a bit inaccurate.

aniviacat · a year ago
I find the title of the article to be confusing/ambiguous. If you're not familiar with the company, "humane" could easily be read as an adjective here.
pseudopersonal · a year ago
How do egregiously failed execs continue to get funding and high profile gigs? It's not just with Humane. I've seen this with the Better.com founder, his second failure after UNCLE, and with execs from MSFT Xbox. I can't for the life of me understand how they continue to get opportunities.

And how do I get in on it?

goalonetwo · a year ago
The skills required to become an exec are a lot of pitching yourself and networking to the right person to get your next job. Those skills are only slightly correlated with how good you actually are as an exec.

Add to this that most companies don't like to take risks and will only hire an exec who has already been an exec somewhere else. To some extent, once you make it through the exec glass ceiling, you are almost guaranteed to always be an exec somewhere.

You see this all over the place, by the way. I have seen the same thing for Directors, Principal engineers, etc. Those people were not always good enough at their jobs to justify their positions. But they for sure knew how to market themselves and interview well.

fsckboy · a year ago
>The skills required to become an exec are a lot of pitching yourself and networking to the right person to get your next job. Those skills are only slightly correlated with how good you actually are as an exec.

source?

large corporations harnessing and deploying massive resources have led to air travel, skyscrapers, cell phones, MRI machines, medicines, the green revolution, etc. all things that require undertaking risk, but of course effectively managing it. Your claim is that skills don't matter to claim leadership positions in such organizations, only self promotion. OK, let's say that's true: the system sure seems to be working. And of course, people who think they have a better way are free to pursue that avenue... maybe some of the most successful companies in the world have pursued just such avenues.

mgh2 · a year ago
Hypothesis: most VCs have a hive mentality and would prefer betting on low risk founders through their network of "highly accomplished" individuals: usually white males - false aura of achievement, survivorship and in-group bias, Matthew effect.
munificent · a year ago
Sounds about right to me.

In-group membership tends to be sticky. People can stay in a tribe even when the same qualifications in someone outside the tribe would be insufficient to let them in.

I imagine that's partially driven by a feeling that one's previous qualifications lend some predictive power that they will earn their place again in the future, even if right now they're foundering. Also, I suspect a lot of it is that people don't want to be in groups where it's too easy to get kicked out, because that makes their own position feel too tenuous. So the kinds of in-groups that stick around are the ones that are a little more generous to their members once they get in.

mike_hearn · a year ago
They haven't raised much funding yet:

> Infactory has thus raised a pre-seed, though its founders declined to confirm the amount or investors. Seed funding will be a focus for the next “six to 18 months,” per Hartley Moy.

A pre-seed round with anonymous investors doesn't mean much. It can be as simple as friends or even themselves.

torlok · a year ago
People are looking to hire someone with experience, and the CEO club is relatively small.
ypeterholmes · a year ago
I'm curious- what's their strategy for determining what's true? Is it an Orwellian setup where certain media organizations (eg. NYT, Reuters, Wikipedia) are deemed to be an authoritative ministry of truth?
torlok · a year ago
Does it matter what their strategy is? This is vaporware. People consume information through search engines and platforms. If there's a market for fact-checking AI, it'll probably be developed in-house, since all the big companies have in-house tech. The most they can hope for is to grab as large of a bag as they can before this bubble pops, or hope they'll get bought out in a few years.
mike_hearn · a year ago
The article explains that:

> Infactory will pull information directly from trusted resources

Of course it means assuming anything the NYT says is true. Biden is sharp as a tack, it's a fact!

Nah, actually the article says they'll avoid politics. They want to be a better Bloomberg Terminal apparently, and only focus on quantitative data for business purposes. Basically OurWorldInData + LLMs.

In theory you can actually do a reasonable job of this sans Orwell. You train a model on a really wide selection of sources and then get it to spit out the knowledge that doesn't seem to have any disagreement within the dataset. The assumption is that no disagreement = fact. This heuristic isn't bad, but it's been tried before and is prone to errors in a few well known cases. Google tried it years ago, predating LLMs. It worked well for things like "how high is the Eiffel tower" but unsurprisingly one place it worked poorly is political ideology and terminology. Different political tribes often have their own ways of using language.
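The "no disagreement = fact" heuristic can be sketched in a few lines. Everything here is hypothetical: a real system would use an LLM to extract (claim, stance) pairs from source documents, and the sample claims are just illustrations of the Eiffel Tower vs. war criminal cases described above.

```python
from collections import defaultdict

# Hypothetical output of a claim-extraction step: (claim, stance) pairs
# gathered from a wide selection of sources.
observations = [
    ("The Eiffel Tower is 330 m tall", "supports"),
    ("The Eiffel Tower is 330 m tall", "supports"),
    ("George Bush is a war criminal", "supports"),
    ("George Bush is a war criminal", "disputes"),
]

def undisputed_claims(observations):
    """Return claims that no source in the dataset disputes."""
    stances = defaultdict(set)
    for claim, stance in observations:
        stances[claim].add(stance)
    # Treat a claim as "fact" only when every source supports it.
    return [c for c, s in stances.items() if s == {"supports"}]

print(undisputed_claims(observations))  # only the Eiffel Tower claim survives
```

The failure mode discussed above is visible even in this toy: the heuristic works only if dissenting sources actually appear in the dataset, so a claim whose critics simply don't write about it passes the filter.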

Example: "is George Bush a war criminal"? Turns out that the internet is full of documents asserting this to be true, and not many asserting it to be false. This isn't because it's a widely agreed fact. It's because the left believe the concept of war criminal makes sense and can be applied liberally, but the right doesn't. Presented with this statement the right tend to say, a criminal according to which court and which government? The left say according to international law which is again, a concept the right doesn't really recognize as being legitimate to begin with because they think that law inherently flows from the concept of a nation state or empire, not a group of NGOs.

At heart the "problem", if you want to call it that, is that the left is generally more passionate about politics and power so if they believe in a concept they do things like take poorly paid journalism jobs and write lots of articles that take for granted the legitimacy of their ideological precepts. The right do things like go work in banking or agriculture or oil, or indeed the tech industry, and don't end up with much time to spend arguing with them. So these concepts filter into the dataset without pushback. Deploy them in real world debate though, and suddenly you get that pushback.

(this seems to be one of the reasons that a naively trained LLM ends up super woke - the internet is just left biased due to the greater output of words from that tribe)

Fortunately there's a limited number of cases like this. The set of such cases does grow over time, but at a relatively slow rate. In theory, if you had people really and truly committed to neutrality and pursuit of truth, you could use LLMs to find claims that are both lacking in disagreement and also non-dependent on ideological disputed concepts. LLMs are actually pretty good at the sort of vagueness and nuance that understanding requires.

The problem is that such a program would be very boring and not commercially useful. Ironically, the very concept of fact checking is itself an "is George Bush a war criminal" type problem. The right take it for granted that reality is complex and depends on perspective, the left take for granted that reality is simple and can be painted in black/white, correct/incorrect. So the right doesn't spend much time on "fact checking" as a concept, because they see it as a quasi-illegitimate endeavor to begin with. From their POV there isn't actually much disagreement on things that are genuinely empirical facts like the speed of light or the price of USD:GBP yesterday at noon, so what fact checkers end up spending time on is in reality a sort of political censorship / propaganda operation aimed at shutting down any viewpoint they don't like.

So an AI dedicated to genuine checking of empirical facts would probably find that there isn't much to do. People are pretty good at agreeing on facts already, there are not that many errors to fix (and the errors that do sneak through are rarely important). An AI dedicated to the sort of fact checking that Snopes engages in would be very busy indeed, but there are plenty of people willing to work for peanuts to engage in ideological warfare against the right so where's their commercial edge? Seems like another business failure waiting to happen.

ypeterholmes · a year ago
"no disagreement = fact" What? This is objectively wrong.

"the left is generally more passionate about politics and power" What? This is objectively wrong.

"People are pretty good at agreeing on facts already" What? This is objectively wrong.

alain94040 · a year ago
Ken Kocienda is known for writing the iPhone software keyboard, an experience he described in a book that is well worth a read.
frizlab · a year ago
These guys have way too much money…
twojobsoneboss · a year ago
Correction: These _investors_ have too much money

Actually it’s a step further: these _LPs_ have too much money

fsckboy · a year ago
In terms of AIs making us smarter: so far, AI-as-we-know-it provides a greater advantage (labor saving) to traditionally smart people (who are less likely to be led astray by bullshit) than it does as a substitute for actually being smart.

I love using it to write code or answer questions better than simple web searches, but man, it produces some nonsense, just as web searches do.

Ragnarork · a year ago
> For another, it’s next to impossible to launch a startup in 2024 without some upfront AI pitch

Nitpicking on one specific sentence, but reading that feels so dumb...

ryandrake · a year ago
If you remember their original teaser videos years ago and the writing around their product when they were in Stealth Mode, it had nothing to do with AI. The sales pitch was that it was the next iteration of an iDevice-like device with a unique form factor and method of interaction.

Only at the very end did they blast "AI AI AI AI" all over the marketing and rename it to "AI Pin" and have everything AI in AI their AI announcements AI talk AI about AI. It was pretty transparent what they were up to.

__loam · a year ago
This was true like 8 months ago but investors are getting wise to it.
tjpnz · a year ago
It's what got us that awful pin.
namaria · a year ago
They went full waterfall. Never go full waterfall. /jk

I'm aware that the original waterfall paper actually calls for frequent iteration. It reads a lot like the Agile Manifesto actually.