AT&T offers me a free!* iPhone Pro every couple of years now, so the cost has actually gone way down.
Is the iPhone 17 supposed to be the bottom-of-the-line now, or the 16e?
Was there anything like that this year? It felt like the iPhone 17 Pro talk lasted 2 minutes, and they spent 99% of their time just talking about the cameras. Although I only started watching parts of the event about 52 minutes in.
I understand that hardware has mainly reached a steady state, but have we also hit a creative peak on the software side, given that we have these amazing machines in our pockets?
Of course, there was no mention of anything AI, so either Apple is truly restraining itself until it has something amazing, or it's continuing to slide into irrelevance and missing the whole AI shift.
The front camera sensor is now square. If there are more people in your selfie, the software detects this and picks a wider aspect ratio for the cropped shot.
Not sure if Android has already been doing this, but this seems like a clever way to use the new hardware.
38:27 in the Apple Event video (https://www.apple.com/apple-events/).
"Science isn’t real - that’s terrible epistemology. It’s a process or method to generate and verify hypotheses and provisional knowledge, using replicable experiments and measurements. We don’t really know the real - we just have some current non-falsified theories and explanations that fit data decently, till we get better ones. The “science is real” crowd generally haven’t done much science and take it on faith."
Stripe had high variable costs (staff, COGS of pass-through processing fees) but low fixed costs. OpenAI has enormous fixed (pre-revenue!) costs alongside high variable costs (staff of AI engineers, inference).
Financially, OpenAI looks more like one of the EV startups like Tesla or Rivian than it does a company like Stripe. And where Stripe was competing with relatively stodgy financial institutions, OpenAI is competing with the very biggest, richest companies in the world.
Each email client renders HTML and CSS differently, and especially for things like dark mode, clients can throw out your design entirely. So you generally stick with 20-year-old design practices: lots of nested tables, web-safe fonts, flat designs, etc. It's really hard to do any design work without an inbox preview tool like Litmus.
One specific feature that Figma really needs is an easier way to measure distances between elements. In email, you have to build whitespace using a lot of incongruous methods (line-height, breaks, cellpadding, etc.), so the Figma padding information often doesn't apply. Just having a simple way to draw a line between two elements and get a measurement would be a real help.
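For anyone who hasn't built email HTML before, here is a minimal sketch of that table-based whitespace approach (the pixel values and copy are placeholders, not anyone's production template):

  <table width="600" cellpadding="0" cellspacing="0" border="0">
    <tr>
      <!-- spacer row stands in for CSS margin, which many clients strip or ignore -->
      <td height="24" style="font-size: 0; line-height: 0;">&nbsp;</td>
    </tr>
    <tr>
      <td style="font-family: Arial, Helvetica, sans-serif; font-size: 16px; line-height: 24px;">
        Body copy goes here.
      </td>
    </tr>
  </table>

Every gap ends up being its own spacer cell, break, or line-height hack, which is why a design tool's padding values rarely translate directly into the markup.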
The problem is that consumers don't behave like that. This is also why Amazon's Dash buttons failed. I always want to see a page with the product details and price before I click "buy". Reducing the number of clicks is not going to make me change my decision and suddenly order more things.
If they want to salvage Alexa, they need to forget shopping and start doubling down on the smart home and assistant experience. The tech is still pretty much where it was in 2014. Alexa can set timers and tell me the weather, and...that's basically it. Make it a value add in my life and I wouldn't mind paying a subscription fee for it.
I actually trust it much more than ChatGPT since it never hallucinates and it cites its source.
It would be a huge shame if it came to that. This (driverless tech) is amazing for what it can do to help people who are old/poor and live in places with no public transport (we have lots of little villages full of people like this, with a bus once a day, often going only one way). What is the most expensive component of providing public transport to such places? Drivers.