Readit News
strgcmc commented on Lessons from 14 years at Google   addyosmani.com/blog/21-le... · Posted by u/cdrnsf
erkanerol · 2 months ago
My favorite is the first one, "The best engineers are obsessed with solving user problems." What I hate about it is that it is very hard to judge someone's skill at this without actually working with them for a long time. It is much easier said than done, and it is hard to prove and sell when everybody is looking for easily assessable skills.
strgcmc · 2 months ago
This is why (flawed though the process may be in other ways) a company like Amazon asks "customer obsession" questions in engineering interviews: to gather data about whether the candidate appreciates this point about needing to understand user problems, and about what steps the candidate takes to learn the users' POV, to walk a mile in their shoes, so to speak.

Of course interview processes can be gamed, and signal to noise ratio deserves skepticism, so nothing is perfect, but the core principle of WHY that exists as part of the interview process (at Amazon and many many other companies too) is exactly for the same reason you say it's your "favorite".

Also IIRC, there was some internal research done in the late 2010s or so, that out of the hiring assessment data gathered across thousands of interviews, the single best predictor of positive on-the-job performance for software engineers, was NOT how well candidates did on coding rounds or system design but rather how well they did at the Customer Obsession round.

strgcmc commented on Reverse engineering a $1B Legal AI tool exposed 100k+ confidential files   alexschapiro.com/security... · Posted by u/bearsyankees
mattfrommars · 3 months ago
This might be off topic, since we are on the topic of AI tools and on HackerNews.

I've been pondering for a long time how one builds a startup in a domain one is not familiar with, but... I just have this urge to carve out a piece of the pie in this space. For the longest time, I had this dream of starting or building an 'AI Legal Tech Company'. The big issue is, I don't work in the legal space at all. I did some cold outreach on law-firm-related forums, which didn't gain any traction.

I later searched around and came across the term 'case management software'. From what I know, this is fundamentally what Clio is, and it makes millions if not billions.

That was close to 1.5 or two years ago, and since then I stopped thinking about it because of this belief I hold: "how can I do a startup in legal when I don't work in this domain?" But when I look around, I see people who start companies in totally unrelated industries, from dental-tech companies to, if I'm not mistaken, the founder of Hugging Face, who doesn't seem to have a PhD in AI/ML and yet founded the company.

Given all that, how does one start a company in an unrelated domain? Say I want to build another case management system or attempt to clone FileVine: do I first read up on what case management software is, or do I cold-contact potential law firms who would partner up to build a SaaS from scratch? Another school of thought goes, "find customers before you have a product, to validate what you want to build"; how does this realistically work?

Apologies for the scattered thoughts...

strgcmc · 3 months ago
I think it comes down to having some insight about the customer need and how you would solve it. Prior experience in the same domain is helpful, but it is neither a guarantee of nor a blocker to having a customer insight (lots of people work in a domain but have no idea how to improve it; alternatively, an outsider might see something that the "domain experts" have been overlooking).

I happened to read the story of some surgeons asking a Formula 1 team to help improve their surgical processes, with spectacular long-term results... The F1 team had zero medical background, but they assessed the surgical processes and found huge issues: poor communication and lack of clarity, people reaching over each other to get to tools, too many people jumping to fix something like a hose coming loose (when you just need one person to do that one thing). F1 teams were very good at designing hyper-efficient, reliable processes to get complex pit stops done extremely quickly, and the surgeons benefitted greatly from those process-engineering insights, even though they had nothing specifically to do with medical or surgical domain knowledge.

Reference: https://www.thetimes.com/sport/formula-one/article/professor...

Anyways, back to your main question -- I find that it helps to start small... Are you someone who is good at using analogies to explain concepts in one domain to a layperson outside that domain? Or even better, to use analogies that help a domain expert from domain A instantly recognize an analogous situation or opportunity in domain B (where they are not an expert)? I personally have benefitted a lot from being naturally curious about learning and teaching through analogies, from finding the act of making analogies a fun hobby in itself, and from honing it professionally to be useful in cross-domain contexts.

I don't think you need to blow this up in your head as some grand mystery with a secret cheat code that unlocks being a founder in an unfamiliar domain. You can start very small and just practice making analogies with your friends or peers: see if you can find fun ways of explaining things across domains with them (either you explain to them with an analogy, or they explain something to you and you try to analogize it from your POV).

strgcmc commented on Addiction Markets   thebignewsletter.com/p/ad... · Posted by u/toomuchtodo
strgcmc · 4 months ago
I got curious and validated your source [1], to pull the exact quote:

"The proportion of Connecticut gambling revenue from the 1.8% of people with gambling problems ranges from 12.4% for lottery products to 51.0% for sports betting, and is 21.5% for all legalized gambling."

Without going into details, I do have some ability to check whether these numbers actually "make sense" against real operator data. I will try to sense-check whether the data I have access to roughly aligns with this or not.

- the "1.8% of people" being problem gamblers does seem roughly correct, per my own experience

- but those same 1.8% being responsible for 51% of sportsbook revenue, does not align with my intuition (which could be wrong! hence why I want to check further...)

- it is absolutely true that sportsbooks have whales/VIPs/whatever-you-call-them, and the general business model is indeed one of those shapes where <10% of the customers account for >50% of the revenue (using very round imprecise numbers), but I still don't think you can attribute 51% to purely the "problem gamblers" (unless you're using a non-standard definition of problem-gambler maybe?)

strgcmc · 4 months ago
I'm sure nobody cares, but the data I can check shows a couple interesting observations (won't call them conclusions, that's too strong):

- Yes, you can find certain slices of 1.8% of customers, that would represent 50%+ of revenue... But this is usually pretty close to simply listing out the top 1.8% of all accounts by spend

- Therefore, to support the original claim, one would essentially have to accept, by definition, that nearly all of the top revenue accounts are "problem gamblers" and almost no one else is... But this doesn't pass a basic smell test: population-wise there are more "poor" problem gamblers than "rich" ones, simply because there are far more poor people than rich people in general, so it is very unlikely that nearly all of the problem gamblers (1.8% of the total population) also happen to overlap so heavily with the top 1.8% of customer accounts by revenue.
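To make that smell test concrete, here is a toy sketch with purely synthetic data (a lognormal spend distribution with made-up parameters, standing in for real operator data, which is proprietary). It shows that any sufficiently heavy-tailed spend distribution puts a large share of revenue in the top 1.8% of accounts, which says nothing by itself about whether those accounts belong to problem gamblers:

```python
import random

random.seed(42)

# Hypothetical heavy-tailed account spends (lognormal).
# Parameters are illustrative assumptions, not real operator data.
spend = [random.lognormvariate(3.0, 2.0) for _ in range(100_000)]
spend.sort()

# Revenue share of the top 1.8% of accounts, ranked by spend.
k = int(0.018 * len(spend))  # 1,800 accounts
top_share = sum(spend[-k:]) / sum(spend)

print(f"Top 1.8% of accounts -> {top_share:.0%} of total spend")
```

With these made-up parameters the top slice typically lands somewhere around half of total spend, which is the point: revenue concentration alone cannot tell you that the top accounts and the problem gamblers are the same people.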

strgcmc commented on Addiction Markets   thebignewsletter.com/p/ad... · Posted by u/toomuchtodo
matthewdgreen · 4 months ago
These services make a relatively smaller piece of their profit from "responsible" people with a lot of self-control. In many cases, the business is probably not viable without problem gamblers. Problem gamblers account for anywhere from 51% of revenue for sports betting apps, to 90% in the case of casinos [1,2] and the numbers seem to be getting worse.

[1] https://portal.ct.gov/-/media/DMHAS/Publications/2023-CT-FIN... [2] https://www.umass.edu/seigma/media/583/download

strgcmc commented on How to build silos and decrease collaboration on purpose   rubick.com/how-to-build-s... · Posted by u/gpi
yesfitz · 4 months ago
Previous discussion: https://news.ycombinator.com/item?id=28411712

I was originally critical of this piece because I work in a highly regulated industry (and silos can lead to compliance issues), but after a little more consideration, I realized I see re-siloing happening organically due to lines of business competing for scarce resources.

For example:

Because everyone needs Team X to complete their projects, Team X's leadership has to decide how to allocate their time. This rarely makes the lines of business feel like their needs are fulfilled. So different lines of business start building back channels to members of Team X, in a bid to get an advocate for their projects.

Eventually, a line of business might hire a Business Analyst just to deal with Team X.

I guess an alternative would be to embed a member of Team X with the line of business. They're dedicated, but could also be reassigned as needed.

strgcmc · 4 months ago
In such scenarios (data engineering / DS / analytics is my personal background), I have learned not to underestimate the value of explicitly declaring, within Team X, that person X1 is dedicated to line L1, person X2 is dedicated to line L2, etc. (similar to your last line about embedding a member of Team X with the line of business).

In theory, it doesn't actually "change" anything, because Team X is still stuck supporting exactly the same number of dependencies + the same volume and types of requests.

But the benefits of explicit over implicit -- the clarity and certainty of knowing who-to-go-to-for-what, the avoidance of context switching, the ability to develop expertise and comfort in a particular domain (as opposed to the team upholding a fantasy of fungibility, as if anyone could take up any piece of work at any time), and the specificity with which you can eventually say, "hey, I need to hire more people on Team X, because you need my team for 4 projects but I only have 3 people" -- all of that has turned out to be surprisingly valuable.

Another way to say it: for Team X to be stretched like that initial state is probably dysfunctional, even terminally so, but it's a slow kind of decay. Rather than pretending it can work, pretending you can virtualize the work across people (as if people were hyper-threads in a CPU core, effortlessly switching tasks), making the allocation discrete/concrete/explicit -- nominating who-is-going-to-work-on-what-for-whom -- is actually a form of escalation. It forces the dysfunction to the surface and forces the organization to confront a sink-or-swim moment sooner than it otherwise would have (versus limping on, pretending you can stay on top of the muddled mess of incoming requests while you tread water and slowly drown).

---

Of course, taking an accelerationist stance is itself risky, and those risks need to be managed. But for example, if the reaction to such a plan is something like, "okay, you've created clarity, but what happens if person X1 goes on vacation/gets-hit-by-bus, then L1 will get no support, right?"... That is the entire purpose/benefit of escalating/accelerating!

In other words, Team X always had problems, but they were hidden beneath a layer of obfuscation by the way work was spread around implicitly... It is actually a huge improvement if you've transformed a murky, unnameable problem into something as crisp and quantifiable as a bus-factor-of-1 problem (which almost everyone understands intuitively).

---

Maybe someday Team X could turn itself into a self-service platform, or an "X-as-a-service" offering, where dependent teams do not need you to work with or for them, but instead just consume your outputs, services, or products at arm's length. So you probably don't want to stay in this embedded, explicit-allocation model forever.

strgcmc commented on The pivot   antipope.org/charlie/blog... · Posted by u/AndrewDucker
SilverSlash · 5 months ago
The human 'benevolence factor' has gone up throughout history as we've advanced and become more civilized. If AI is even more advanced than us then why is it naive to assume it will be more benevolent than us?
strgcmc · 5 months ago
The most apt way I've read to reason about AI is to treat it like an extremely foreign, totally alien form of intelligence. Not necessarily that the models of today behave like this, but we're talking about the future, aren't we?

Just framing your question against a backdrop of "human benevolence", and implying it is a single dimension (a scalar value that could be higher or lower), is already too biased. You assume that logic which applies to humans can be extrapolated to AI. There is not much basis for this assumption, in much the same way that there is not much basis to assume an alien sentient gas cloud from Andromeda would operate on the same morals or concept of benevolence as us.

strgcmc commented on What if I don't want videos of my hobby time available to the world?   neilzone.co.uk/2025/09/wh... · Posted by u/speckx
chrischen · 5 months ago
I think you can make arguments for and against the fundamental right to record or to not be recorded.

If someone is doing something bad/illegal, do we have a right to record/document it? If I am outside minding my own business and not doing anything bad, do I have a right to not be recorded?

What is the difference between seeing and recalling something that happened vs recording? What happens when technology blurs the difference (for example if we all start wearing and using camera AR glasses)?

strgcmc · 5 months ago
A purely technology-minded compromise to this question (i.e., how to support both the "good" and "bad" kinds of recording) is probably something along the lines of expiry: enforcing a lack of permanence as the default (a digital-age, recording-centric version of "innocent until proven guilty", which honestly is one of the greatest inventions in the history of human legal systems). Of course, one should never make societal decisions purely from a standpoint of technological practicality.

Since you can't be sure in advance what is "bad" or illegal, and people will record many things anyway without thinking much about it, the default should be auto-expiry/auto-deletion after X hours or days, unless some reason or confirmation is provided to justify persistence.

For example, imagine a near-future where AI assistants are commonplace, and recording is ubiquitous but legally mandated to default to "disappearing video", like Snapchat, across all the major platforms (YouTube, TikTok, X, Twitch, Kick, etc.). Every day you, a regular person doing regular things, might get 10,000 notifications of the form "you have been recorded in video X on platform Y; do you consent to this being persisted?" Law enforcement would have to go through a judge to file something like a "persistence warrant" (analogous to a search warrant), and maybe there is another channel for concerned citizens who want to persist video of a "bad guy" doing "bad things" (perhaps an injunction against auto-deletion until a review body can examine the request).

Obviously this would be a ton of administrative overhead and micro-decisions, which is why I mentioned the AI-assistant angle: I could tell my personal AI helper, "here are my preferences, here is when I consent to recording and when I don't; knowing my personal rules, please go deal with the 10,000 notifications I get every day, thanks." If there is disagreement or lack of consensus, some rules would have to be developed for combining the parties' wishes. Take a recording of a child's soccer game where 8 parents consent to persistence and 3 don't: perhaps majority rules, so the persistence side wins, but the majority has to pay the cost of a blurring/anonymization service that protects the 3 who objected. That could be a framework for handling disputed outcomes.
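As a toy illustration of the auto-expiry-by-default core of this idea (everything here is hypothetical; the `Recording` type and `sweep` function are my own invention, not any real platform's API), a minimal sketch might look like:

```python
from dataclasses import dataclass

@dataclass
class Recording:
    """Hypothetical recording metadata on a platform with expiry-by-default."""
    created_at: float        # epoch seconds when the recording was made
    ttl_seconds: float       # default lifetime before auto-deletion
    persisted: bool = False  # flipped only by a consent/warrant/review flow

    def is_expired(self, now: float) -> bool:
        # Persisted recordings are exempt; everything else ages out.
        return not self.persisted and (now - self.created_at) >= self.ttl_seconds

def sweep(store: list[Recording], now: float) -> list[Recording]:
    """Drop everything past its TTL that nobody chose to persist."""
    return [r for r in store if not r.is_expired(now)]
```

A periodic sweep job would enforce the default: recordings disappear unless some explicit process (consent, warrant, injunction) flipped `persisted` first, mirroring the "innocent until proven persistent" framing above.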

I'm also purposely ignoring the edge case of a bad actor who persists the videos anyway; in short, I think the best we can do is impose civil legal penalties if an unwilling participant later finds out you kept their videos without permission.

Anyways, I know that's all super fanciful and unrealistic in many ways, but it's the sort of compromise world-building I can imagine: it retains familiar elements of how people think about consent and legal process, while acknowledging that recording is ubiquitous and that we need sane defaults plus follow-up processes to review or adjudicate disputes later. Disputes might arise over trivial things or serious criminal matters; a criminal won't consent to their recording being persisted, but society needs a sane way to override that, which is what judges and warrants are meant to do: protect rights by requiring a bar of justification to be cleared.

strgcmc commented on You did this with an AI and you do not understand what you're doing here   hackerone.com/reports/334... · Posted by u/redbell
simulator5g · 6 months ago
This is a really strange reply.
strgcmc · 5 months ago
I think they just read my first sentence and decided to take offense immediately. Shrug.

All I meant was, I didn't want to go down the path of talking about Trump... that's a very, very dead horse to beat. I thought there were interesting elements of this person's ideas worth further discussion, elements that could be split off from the Trump lightning rod, so I tried to do that. I generally agreed with their original ideas and wanted to build on or respond to them, without getting sucked into wasting breath on Trump (nobody benefits, regardless of whether your views lean left or right).

I'm sure I could fix some gaps in the way I explained myself, but oh well, just another day on the internet.

strgcmc commented on Open Social   overreacted.io/open-socia... · Posted by u/knowtheory
ethbr1 · 5 months ago
Psychological manipulation is only being performed because it generates dollars.
strgcmc · 5 months ago
True, of course, that dollars are the end goal, but frankly it'd be better if they just took the dollars out of my pocket directly, instead of poisoning my brain first so they can trick me into handing some over...

Obviously I'm being hyperbolic, but I think that if society survives past this phase, our descendants will look back and judge us for letting psychological manipulation be a valid economic process for generating dollars, in much the same way we might judge our ancestors for building a whole industry around hunting whales for lamp oil. They might acknowledge that fuel was important and necessary to power an industrializing society, but they would mock us for not learning to refine petroleum sooner, and for how silly it was to go through the tech tree of fucking whale hunting just to get some fuel.

It is fucking silly/absurd/dangerous, that we go through the tech tree branch of psychological manipulation, just to be able to sell some ads or whatever.

u/strgcmc

karma: 734 · cake day: May 16, 2017