Realistically the legislation was only targeting Apple. If consumers want USB-C, then they can vote with their wallets and buy an Android, which is a reasonable alternative.
I don't think I'll make my 2030 date at this point, but there might eventually be some version of Windows like this.
I also recognize that Windows' need to remain backwards compatible might prevent this, unless there's a Rosetta-style emulation layer to handle all the Win32 APIs, etc.
The average end user will be using some sort of Tivoized device, running a closed-source fork of an open-source kernel, with state-of-the-art trusted computing modules making sure nobody can run any binary that wasn't digitally signed and distributed through an "app store" owned by the device vendor, who takes something like a 25% cut of all sales.
In other words, everything will be a PlayStation, and Microsoft will be selling their SaaS services to enterprise users through those. That is my prediction.
The proposition in this article is extremely simple: Adobe Flash would have compromised the end-user experience on iOS devices, so it wasn't allowed into the walled garden. All the lip service he pays to open source here reeks of PR bullshit. Of all the Silicon Valley companies, Apple is perhaps the worst offender, and the furthest removed from the idea of an "open web".
Pretending that WebKit was an open-source project born at Apple, for the sake of contributing to the open web or whatever, is straight-up trashy. Besides the failure to attribute, one could argue that WebKit is only open source because Apple had to comply with the LGPL when it forked KHTML and KJS. I see no reason to believe that a fully original browser engine born at Apple would be open source.
You can make some kind of argument from this that Linux has won; certainly the Linux syscall API is now perhaps the most ubiquitous application API.
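(For anyone unsure what "the Linux syscall API" means as an application API, here's a minimal sketch of calling it directly from Python via libc's syscall(2) wrapper. The syscall number is the x86-64 Linux one, so this is illustrative and non-portable by design.)

    # Hedged sketch: hit the Linux syscall ABI directly instead of going
    # through a higher-level wrapper. 39 is getpid on x86-64 Linux only.
    import ctypes

    libc = ctypes.CDLL(None, use_errno=True)  # handle to the loaded libc
    SYS_getpid = 39  # x86-64 syscall number; differs on other architectures
    print(libc.syscall(SYS_getpid))  # prints the same value as os.getpid()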
Go prior to modules worked really well this way, and I believe Ubuntu was using it like that, but the Go authors came out against treating it as a scripting language.
Maybe this will make it seem like a more viable approach.
I can see it using some form of PEFT (parameter-efficient fine-tuning) so that the output stays consistent with both the setting and the characters. Then it's about generating each short segment over and over until you are happy with the outcome. You stitch them together, and if you don't like some part you can always regenerate it, change the prompt, and so on.
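To make the PEFT part concrete, here's a minimal sketch of the adapter idea using the Hugging Face peft library. It's on a small text model because the actual video models and APIs are speculative; the model name, rank, and target modules are illustrative assumptions, not a real video pipeline.

    # LoRA-style PEFT sketch: freeze the base model and train only small
    # adapter weights, which is how you'd lock in a character or setting
    # without a full fine-tune. GPT-2 stands in for the generative model.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")
    config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
    model = get_peft_model(base, config)  # base weights frozen, adapters trainable
    model.print_trainable_parameters()    # a tiny fraction of the base model

You could then swap adapters per character or per setting while reusing the same base model across segments.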
Think of all of your favorite novels that are deemed "impossible" to adapt to the screen.
Or think of all the brilliant ideas for films that are destined to die in the minds of people who will never, ever have the luck or connections required to make it to Hollywood.
When this stuff truly matures and gets commoditized, I think we are going to see an explosion of some of the most mind-blowing art.
I’ve spent time at small startups and on “elite” big tech teams, and I’m usually the only one on my team using a debugger. Almost everyone in the real world (at least in web tech) seems to do print statement debugging. I have tried and failed to get others interested in using my workflow.
I generally agree that it’s the best way to start understanding a system. Breaking on an interesting line of code during a test run and studying the call stack that got me there is infinitely easier than trying to run the code forwards in my head.
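For anyone who hasn't tried this workflow, here's a minimal sketch with pytest (the function names are made up; the point is the mechanics):

    # Drop into pdb at the interesting line during a test run, then use
    # `w` to print the call stack that got you there and `u`/`d` to walk
    # up and down the frames inspecting locals.
    def interesting(x):
        breakpoint()  # pauses execution here under the debugger
        return x * 2

    def test_interesting():
        assert interesting(21) == 42

Run it with "pytest -s" so pytest doesn't capture stdin/stdout, and you land at a pdb prompt on that exact line with the full stack available. IDE debuggers do the same thing with a nicer UI.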
Young grugs: learning this skill is a minor superpower. Take the time to get it working on your codebase, if you can.