madmax96 commented on Apple and Amazon will miss AI like Intel missed mobile   gmays.com/the-biggest-bet... · Posted by u/gmays
ericmay · 7 days ago
> But now there’s a new paradigm shift. The iPhone was perfect for the mobile era, which is why it hasn’t changed much over the last decade.

> AI unlocks what seems to be the future: dynamic, context-dependent generative UIs or something similar. Why couldn’t my watch and glasses be everything I need?

  https://www.apple.com/watch/

  https://www.apple.com/apple-vision-pro/

> The other problem is that at its core, AI is two things: 1) software and 2) extremely fast-moving/evolving, two things Apple is bad at.

Idk, my MacBook Pro is pretty great and runs well. "Fast-moving" here implies that as soon as you release something, there's a big paradigm shift that forces you to move even faster to catch up. I don't think that's the case, and where it is, the new software (LLMs) still needs to be distributed to end users and devices. A company like Apple can pay money and build functionality to be the distributor of the latest models, and then it doesn't really matter how fast those models are created. Apple's real threat is a category shift in devices, which AI may or may not be part of.

I'm less certain about Amazon, but unless (insert AI company) wants to take on all the business risk of hosting governments, corporations, and hospitals on a cloud platform, I think Amazon can just publish their own models, buy someone else's, or integrate with multiple leading AI model publishers.

madmax96 · 7 days ago
Came here to say pretty much this. Hardware seems more valuable than a model.

I think AI could be commoditized. Look at DeepSeek stealing OpenAI's model. Look at the competitive performance between Claude, ChatGPT, Grok, and Gemini. Look at open weight models, like Llama.

Commoditized AI still needs to be used through a device. The post argues that other devices, like watches or smart glasses, could be better positioned to take advantage of AI. But... your point stands. Given Apple's success with hardware, I wouldn't bet against them making competitive wearables.

Hardware is hard. It's expensive to get wrong. It seems like a hardware company would be better positioned to build hardware than an AI company. Especially when you can steal the AI company's model.

Supply chains, battery optimization, etc. are all hard-won battles. But AI companies have had their models stolen in months.

If OpenAI really believed models would remain differentiated, then why venture into hardware at all?

madmax96 commented on Ask HN: Imagine a world with 1Tb/s internet. What would change?    · Posted by u/dinobones
janice1999 · a year ago
AR/VR with low latency could allow for some interesting multiplayer experiences.

However there is also the downside of making high fidelity omnipresent surveillance easier.

madmax96 · a year ago
Is bandwidth the limiting factor or is it ping?
madmax96 commented on Brew-Nix: a flake automatically packaging all homebrew casks   discourse.nixos.org/t/bre... · Posted by u/KolenCh
viraptor · a year ago
In practice, it's not that restrictive. It's pretty rare to find something that actually requires a more recent version. (And there's slow progress in updating it.)
madmax96 · a year ago
Granted, it’s not restrictive if you only want to use Nix for general utilities and Unix libraries. But it’s extremely restrictive if you want to use Nix to manage macOS apps. And I love Nix, so of course I want to do that :)

Thanks for posting the link to sponsor this work.

madmax96 commented on Brew-Nix: a flake automatically packaging all homebrew casks   discourse.nixos.org/t/bre... · Posted by u/KolenCh
madmax96 · a year ago
I love Nix on macOS. But one word of caution: Nix uses a very outdated, EOL’d macOS SDK (https://github.com/NixOS/nixpkgs/issues/101229).
madmax96 commented on C++ Safety, in Context   herbsutter.com/2024/03/11... · Posted by u/ingve
madmax96 · a year ago
This post contains a number of statements that mislead the reader into believing that we are overreacting to memory safety bugs. For example:

> An oft-quoted number is that “70%” of programming language-caused CVEs (reported security vulnerabilities) in C and C++ code are due to language safety problems... That 70% is of the subset of security CVEs that can be addressed by programming language safety

I can’t figure out what the author means by “programming language-caused CVE”. No analysis defines a “programming language-caused CVE”; the analyses just look at CVEs and their associated CWEs. The author invented this term but did not define it.

I can’t figure out what the author means by “of the subset of security CVEs that can be addressed by programming language safety”. First, aren’t all CVEs security CVEs? Why qualify the statement? Second, the very post the author cites ([1]) states:

> If you have a very large (millions of lines of code) codebase, written in a memory-unsafe programming language (such as C or C++), you can expect at least 65% of your security vulnerabilities to be caused by memory unsafety.

The figure is unqualified. But the author adds multiple qualifications. Why?

[1] https://alexgaynor.net/2020/may/27/science-on-memory-unsafet...

madmax96 commented on Total Functional Programming (2004) [pdf]   ncatlab.org/ufias2012/fil... · Posted by u/leonidasv
wredue · a year ago
And if I go to a flat earth conference, I will find that they produce lots of “proof” for flat earth.

I don’t really accept “this group of people whose heads are super far up the ‘pure functions’ ass choose purity for their solutions” as “evidence” that purity is better.

I’m not saying that purity is bad by any stretch. I just consider it a tool that is occasionally useful. For methods modifying internal state, I think you’ll have a hard time with the assertion that “purity is easier to reason about”.

madmax96 · a year ago
>For methods modifying internal state, I think you’ll have a hard time with the assertion that “purity is easier to reason about”.

Modeling a method that modifies internal state as a pure function from old state to new state is the simplest way to reason about it: it gives you explicit preconditions and postconditions.
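For instance, here's a minimal Lean sketch of that style (the `Account`/`withdraw` names are purely illustrative):

  -- The state is an explicit value; the "method" is a pure function from old state to new state.
  structure Account where
    balance : Nat

  -- The precondition is an explicit argument to the function...
  def withdraw (acct : Account) (amount : Nat) (_ok : amount ≤ acct.balance) : Account :=
    { acct with balance := acct.balance - amount }

  -- ...and the postcondition is just a statement about the output state.
  example (acct : Account) (amount : Nat) (ok : amount ≤ acct.balance) :
      (withdraw acct amount ok).balance = acct.balance - amount := rfl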

madmax96 commented on Total Functional Programming (2004) [pdf]   ncatlab.org/ufias2012/fil... · Posted by u/leonidasv
wredue · a year ago
>Well, why would it be taken as a given that programming should mimic the hardware?

Because that’s the environment that programs run on; doing anything else is fighting against the environment. I would argue that humans have done this to great effect in some areas, but not in programming.

>Pure functions are easier to reason about (there may be exceptions of course), that's why they're interesting.

Prove it.

madmax96 · a year ago
>Prove it.

Proofs are good evidence that pure functions are easier to reason about. Many proof assistants (Coq, Lean, F*) are built on theories like the Calculus of Inductive Constructions, whose core languages have only pure, total functions. The fact that state-of-the-art tools for reasoning about programs use pure functions is a pretty good hint that pure functions are a good tool for reasoning about behavior. At least, they're the best we have so far.

This is because of referential transparency. If I see `f n` in a language with pure functions, I can simply look up the definition of `f` and paste it at the call site, with every occurrence of `f`'s parameter replaced by `n`. I can then simplify the expression as far as I like. Not so in an imperative language: there could be global variables whose state matters, or aliasing that changes the behavior of `f`. To actually understand what the imperative version of `f` does, I have to trace its execution, and in the worst case I have to repeat that work __every time__ I use `f`.
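A tiny Lean sketch of that substitution property (`f` here is just an illustrative definition):

  -- A pure function: its result depends only on its argument.
  def f (n : Nat) : Nat := n * n + 1

  -- Referential transparency: `f 3` can be replaced by its definition with the
  -- parameter substituted, and then simplified, without changing its meaning.
  example : f 3 = 3 * 3 + 1 := rfl
  example : f 3 = 10 := rfl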

madmax96 commented on Total Functional Programming (2004) [pdf]   ncatlab.org/ufias2012/fil... · Posted by u/leonidasv
munchler · a year ago
But that's the true nature of division. Unless one can prove that the denominator is non-zero, the computation is not well defined.

Practically speaking, languages like Haskell and F# have a monadic syntax that makes this considerably less ugly.

madmax96 · a year ago
Dependently typed languages make it even better.

For example, in Lean 4:

  -- Division where the caller must supply a proof that the denominator is non-zero.
  def myDiv (numerator : Nat) {denominator : Nat} (denominatorNotZero : denominator ≠ 0) : Nat :=
    if h : denominator > numerator then
      0
    else
      1 + myDiv (numerator - denominator) denominatorNotZero
  termination_by numerator
  decreasing_by omega

  -- Example usage.
  example : myDiv 1 (denominator := 1) (by decide) = 1 := by simp [myDiv]
  example : myDiv 120 (denominator := 10) (by decide) = 12 := by simp [myDiv]
You have to submit a proof that the denominator is non-zero in order to use `myDiv`. No monad required ;).

madmax96 commented on "You have a 27% 'AI' issue in here"   twitter.com/rustykitty_/s... · Posted by u/bundie
HDThoreaun · 2 years ago
I doubt oral examinations come back in general. The undergrad model relies on massive classes, and I don’t see how you can do oral exams in those 300-person classes without massive cheating as the first students tell everyone else what to expect.
madmax96 · 2 years ago
It doesn’t matter if students know what to expect.

An oral exam isn’t the same as reading a written exam out loud. There is a set of learning outcomes defined in the syllabus, and the examiner uses them to ask probing questions that start a conversation - emphasis on conversation. A conversation can’t be faked. A simple follow-up question can reveal that a student has only a shallow understanding of the material. Or the student could remember everything they’ve been told but fail to make connections between learning outcomes. You can’t cram for an oral exam. You have to digest the course material and think about what things mean in context.

After all, students know what to expect on standardized tests. Some still do better than others :-).

madmax96 commented on Windows 11 vs. Ubuntu 23.10 Performance on the Lenovo ThinkPad P14s Gen 4   phoronix.com/review/think... · Posted by u/mfiguiere
madmax96 · 2 years ago
Why does Ubuntu almost always perform better on the vec GPU benchmarks but not the scalar benchmarks? Is there a trade-off here?
