Readit News
planetpluta commented on Delta’s new AI-powered pricing strategy   blog.getjetback.com/delta... · Posted by u/bdev12345
dortlick · a month ago
Don't we already have individual pricing on lots of things, like used cars, contractors doing work on your house, car insurance, etc.? Most people who work in sales have some ability to adjust pricing based on their personal judgment of the customer (mark). Do you think there's no discrimination when a contractor comes into your house, sizes you up, and decides what they think you're willing to pay, or maybe just doesn't like your race/religion/smell? I'm just a little confused about why it's so outrageous if airlines or Amazon give different prices to different people when it happens every time there's actual one-on-one price negotiation.
planetpluta · a month ago
I’d say two big differences are 1) human vs. machine (especially at the scale of something like Delta Air Lines) and 2) you have a lot more power in the negotiations you described! Basing it on 5 years of purchases and historical data isn’t a negotiation; it’s “my way or the highway.”
planetpluta commented on Delta’s new AI-powered pricing strategy   blog.getjetback.com/delta... · Posted by u/bdev12345
svachalek · a month ago
Historically, all prices are negotiated. We ended up with a culture of flat pricing for efficiency, with only a few high value items like cars being negotiated. You walk into a car dealer and spend hours to determine exactly how much they can get you to pay for the car you want. But now with advanced automation, it's worth the complexity to extract the maximum value out of each customer again, even on small purchases of a dollar or two.
planetpluta · a month ago
This is an interesting way to think about it. I would argue that flat pricing wasn’t for efficiency but for “fairness”.

I’d also point out that AI-driven price discrimination isn’t anywhere close to negotiated. You’re stuck with the price the machine gives you, with little to no recourse, short of rewriting your entire digital life!

planetpluta commented on Show HN: Workout.cool – Open-source fitness coaching platform   github.com/Snouzy/workout... · Posted by u/surgomat
Eric_WVGG · 2 months ago
Wow, this doesn't suck at all.

The thing that's missing for me is suggestions on how much to lift / how many reps. There's a fitness program called 100 Pushups that came up with a good solution for that…

- Repeat the exercise (in this case, a push-up) as many times as possible until failure. A person might achieve 8, for example.

- The app comes up with a schedule; every other day, the user is expected to do sets of 3, 4, 3, 3, and 5 reps (with a 2-minute rest between each set)

- The app's schedule uses an algorithm that ramps up the reps at a pace the user can manage, and self-adjusts if the schedule is too easy or too hard (a rough sketch of that logic follows this list)…

- until the user can do 100 push-ups by the end of the 6-week program.
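
Roughly, that ramp-up logic might look something like this. This is just a sketch with made-up numbers and a made-up adjustment rule, not the actual 100 Pushups algorithm:

```python
# Rough sketch, not the real 100 Pushups algorithm: seed the plan from an
# initial max-rep test, grow total volume each session, and nudge the pace
# based on how the last workout felt. All numbers are illustrative.

def build_sets(total_reps, n_sets=5):
    """Split a session's total reps into n_sets roughly even sets."""
    base, extra = divmod(total_reps, n_sets)
    return [base + (1 if i < extra else 0) for i in range(n_sets)]

def next_total(total_reps, feedback, ramp=1.15):
    """Scale the next session's volume; feedback is 'easy', 'ok', or 'hard'."""
    adjust = {"easy": 1.10, "ok": 1.00, "hard": 0.90}[feedback]
    return max(total_reps + 1, round(total_reps * ramp * adjust))

# An initial test of 8 push-ups might seed 18 total reps (3 + 4 + 3 + 3 + 5).
total = 18
for session in range(1, 19):       # roughly 3 sessions/week for 6 weeks
    print(f"Session {session}: sets = {build_sets(total)}")
    total = next_total(total, "ok")
```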

If there's any interest in this, I'd be open to discussing a UI and contributing.

planetpluta · 2 months ago
Curious about the 100 pushups program app — do you have a particular one you like that you can share?

Edit: Followed the GitHub issue and found the link!

planetpluta commented on Show HN: AirAP AirPlay server – AirPlay to an iOS Device   github.com/neon443/AirAP... · Posted by u/neon443
planetpluta · 3 months ago
A lot of apps allow you to AirPlay to multiple devices at once; it would be neat to put this on a bunch of iPhones to play music simultaneously.
planetpluta commented on Show HN: A toy version of Wireshark (student project)   github.com/lixiasky/vanta... · Posted by u/lixiasky
andygcook · 3 months ago
Congratulations on the launch! FYI there is a pretty well-known YC startup named Vanta that helps companies manage various security compliance certifications.

Obviously, there are often different services that share the same name, but given that Vanta isn't an actual word in the English language, I would think this might be confusing for people.

As a data point of one, I just assumed Vanta (the company) was doing a Show HN today and was confused at first glance.

planetpluta · 3 months ago
> I just assumed Vanta (the company) was doing a Show HN today and was confused at first glance

Did the title of the post change? At first glance, the Show HN is a toy Wireshark program, very far from any trust management or compliance product.

planetpluta commented on Google AI Ultra   blog.google/products/goog... · Posted by u/mfiguiere
Ancapistani · 3 months ago
I wonder if there's an opportunity here to abstract away these subscription costs and offer a consistent interface and experience?

For example - what if someone were to start a company around a fork of LiteLLM? https://litellm.ai/

LiteLLM, out of the box, lets you create a number of virtual API keys. Each key can be assigned to a user or a team, and can be granted access to one or more models (and their associated keys). Models are configured globally, but can have an arbitrary number of "real" and "virtual" keys.

Then you could sell access to a host of primary providers - OpenAI, Google, Anthropic, Groq, Grok, etc. - through a single API endpoint and key. Users could switch between them by changing a line in a config file or choosing a model from a dropdown, depending on their interface.

Assuming you're able to build a reasonable userbase, presumably you could then contract directly with providers for wholesale API usage. Pricing would be tricky, as part of your value prop would be abstracting away marginal costs, but I strongly suspect that very few people are actually consuming the full API quotas on these $200+ plans. Those that are are likely to be working directly with the providers to reduce both cost and latency, too.

The other value you could offer is consistency. Your engineering team's core mission would be providing a consistent wrapper for all of these models - translating between OpenAI-compatible, Llama-style, and Claude-style APIs on the fly.
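
To make that concrete, here is a rough sketch of the routing idea: virtual keys scoped to teams and models, resolved to real provider keys behind one endpoint. Names, keys, and structures are hypothetical, not LiteLLM's actual API:

```python
# Hypothetical sketch of the "one key, many providers" wrapper described
# above; all names, keys, and structures are illustrative only.

from dataclasses import dataclass

@dataclass
class VirtualKey:
    team: str
    allowed_models: set[str]

# Global model registry: model name -> (provider, real provider API key)
MODELS = {
    "gpt-4o":            ("openai",    "sk-real-openai-key"),
    "claude-3-5-sonnet": ("anthropic", "sk-real-anthropic-key"),
    "llama-3-70b":       ("groq",      "gsk-real-groq-key"),
}

# Virtual keys handed out to users/teams, each scoped to a set of models
VIRTUAL_KEYS = {
    "vk-team-a": VirtualKey(team="team-a", allowed_models={"gpt-4o", "llama-3-70b"}),
    "vk-team-b": VirtualKey(team="team-b", allowed_models=set(MODELS)),
}

def route(virtual_key: str, model: str, prompt: str) -> str:
    """Resolve a virtual key to the real provider call (stubbed out here)."""
    vk = VIRTUAL_KEYS[virtual_key]
    if model not in vk.allowed_models:
        raise PermissionError(f"{vk.team} has no access to {model}")
    provider, real_key = MODELS[model]
    # A real implementation would translate the request into the provider's
    # API shape (OpenAI-style, Claude-style, etc.) and forward it.
    return f"[{provider}] call with key {real_key[:6]}… for prompt {prompt!r}"

print(route("vk-team-a", "llama-3-70b", "Hello"))
```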

Is there already a company doing this? If not, do you think this is a good or bad idea?

planetpluta · 3 months ago
I think the biggest hurdle would be complying with the TOS. I imagine OpenAI etc. would not be fans of sharing quotas across individuals in this way.
planetpluta commented on Are Americans' perceptions of the economy and crime broken?   niemanlab.org/2024/11/are... · Posted by u/bediger4000
karaterobot · 7 months ago
> But there also seems to be something more fundamental happening. Before the covid pandemic, consumer sentiment was relatively predictable based on economic fundamentals. The hard data and the survey responses tended to move up and down in something like unison. But since 2020, they’ve become disconnected, with a wide and pessimistic gap opening up between them. It’s hard to look at that phenomenon and see the impact of a changed media environment.

I don't understand that last sentence. I suspect I'm reading it wrong, but am having trouble parsing it in a way that doesn't mean: this data cannot be explained by a changing media environment since 2020. It's very easy for me to look at the disconnect between survey responses and economic data and see that how people receive and process news is largely responsible for it. The disconnect between facts and opinions has been widely observed, and while the pandemic didn't start it, it seems to have been an accelerator for it.

planetpluta · 7 months ago
Anecdotally, I completely agree that consumption of media has shifted since the pandemic and could reasonably explain this gap.
planetpluta commented on Are Americans' perceptions of the economy and crime broken?   niemanlab.org/2024/11/are... · Posted by u/bediger4000
planetpluta · 7 months ago
In a sense, the macro trend is irrelevant and mainly used as a talking point by the media.

Individuals experience the world at an individual level; it is easy to go along with any trend that fits your desired narrative because, until it is at odds with your individual experience, it doesn’t really matter.

(I’m being a bit reductive and haven’t fully fleshed out this thought, but I think the sentiment is accurate.)

planetpluta commented on How we made our AI code review bot stop leaving nitpicky comments   greptile.com/blog/make-ll... · Posted by u/dakshgupta
planetpluta · 8 months ago
> Essentially we needed to teach LLMs (which are paid by the token) to only generate a small number of high quality comments.

The solution of filtering after the comment is generated doesn’t seem to address the “paid by the token” piece.

planetpluta commented on Were RNNs all we needed?   arxiv.org/abs/2410.01201... · Posted by u/beefman
seanhunter · a year ago
We somehow want a network that is neuromorphic in structure but we don't want it to be like the brain and take 20 years or more to train?

Secondly, how do we get to claim that a particular thing is neuromorphic when we have such a rudimentary understanding of how a biological brain works, or of how it generates things like a model of the world, an understanding of self, etc.?

planetpluta · a year ago
Something to consider is that it really could take 20+ years to train like a brain. But once you’ve trained it, you can replicate it at ~0 cost, unlike a brain.

u/planetpluta

Karma: 62 · Cake day: August 10, 2022