Readit News
paul_e_warner commented on Apple is open sourcing Swift Build   swift.org/blog/the-next-c... · Posted by u/dayanruben
paul_e_warner · a year ago
Reading this, it’s not clear how well integrated Swift Build is with Swift’s tooling and language server. I know the language server has been open source for a while now. Having them be separate seems like it would create issues with duplicate code.
paul_e_warner commented on OpenAI says it has evidence DeepSeek used its model to train competitor   ft.com/content/a0dfedd1-5... · Posted by u/timsuchanek
vinni2 · a year ago
How would they prove DeepSeek used its model? I would be curious to know their methodology. Also, what legal action can OpenAI take? Can DeepSeek be banned in the US?
paul_e_warner · a year ago
If you read the article (which I know no one does anymore):

>OpenAI and its partner Microsoft investigated accounts believed to be DeepSeek’s last year that were using OpenAI’s application programming interface (API) and blocked their access on suspicion of distillation that violated the terms of service, another person with direct knowledge said. These investigations were first reported by Bloomberg.

paul_e_warner commented on OpenAI says it has evidence DeepSeek used its model to train competitor   ft.com/content/a0dfedd1-5... · Posted by u/timsuchanek
blast · a year ago
Everyone is responding to the intellectual property issue, but isn't that the less interesting point?

If DeepSeek trained off OpenAI, then it wasn't trained from scratch for "pennies on the dollar" and isn't the Sputnik-like technical breakthrough that we've been hearing so much about. That's the news here. Or rather, the potential news, since we don't know if it's true yet.

paul_e_warner · a year ago
I feel like which one you care about depends on whether you're an AI researcher or an investor.
paul_e_warner commented on OpenAI says it has evidence DeepSeek used its model to train competitor   ft.com/content/a0dfedd1-5... · Posted by u/timsuchanek
aprilthird2021 · a year ago
> the prevailing narrative ATM is that DeepSeek's own innovation was done in isolation and they surpassed OpenAI

I did not think this, nor did I think this was what others assumed. The narrative, I thought, was that there is little point in paying OpenAI for LLM usage when a much cheaper, similar / better version can be made and used for a fraction of the cost (whether it's on the back of existing LLM research doesn't factor in)

paul_e_warner · a year ago
There were different narratives for different people. When I heard about R1, my first response was to dig into their paper and its references to figure out how they did it.
paul_e_warner commented on OpenAI says it has evidence DeepSeek used its model to train competitor   ft.com/content/a0dfedd1-5... · Posted by u/timsuchanek
paul_e_warner · a year ago
There seem to be two kinda incompatible things in this article: 1. R1 is a distillation of o1. This is against OpenAI's terms of service and possibly some form of IP theft. 2. R1 was leveraging GPT-4 to make its output seem more human. This is very common; most universities and startups do it, and it's impossible to prevent.

When you take both of these points and put them back to back, a natural answer seems to suggest itself, which I'm not sure the authors intended to imply: R1 attempted to use o1 to make its answers seem more human, and as a result it accidentally picked up most of its reasoning capabilities in the process. Is my reading totally off?

paul_e_warner commented on AI, but at What Cost? Breakdown of AI's Carbon Footprint   loopbreaker.substack.com/... · Posted by u/tnahrf
strogonoff · a year ago
A few questions popped into my head. Can you retain the knowledge to evaluate model output required to effectively help and guide models to do something if you do not do it yourself anymore? For humans to flourish, does it mean simply “do as little as possible”? Once you automated everything, where would one find meaningful activity that makes one feel needed by other humans? By definition automation is about scaling and the higher up the chain you go the fewer people are needed to manage the bots; what do you do with the rest? (Do you believe the people who run the models for profit and benefit the most would volunteer to redistribute their wealth and enact some sort of post-scarcity communist-like equality?)
paul_e_warner · a year ago
> Can you retain the knowledge to evaluate model output required to effectively help and guide models to do something if you do not do it yourself anymore?

I mean, education will have to change. In the early years of computer science, the focus was on building entire systems from scratch. Now programming is mainly about developing glue between different libraries to suit our particular use case. This means that we need to understand far less about the theoretical underpinnings of computing (hence all the griping that programmers never need to write their own sorting algorithms, so why does every interview ask for one).

It's not gone as a skill, it's just different.

>For humans to flourish, does it mean simply “do as little as possible”? Once you automated everything, where would one find meaningful activity that makes one feel needed by other humans?

So I had a eureka moment with AI programming a few weeks ago. I described a basic domain problem in clear English. It was revealing not just because of all the time it saved, but because it fundamentally changed how programming worked for me. Instead of writing code, I was able to focus my mind completely on the domain problem itself. My experiences with AI programming have been much worse since then, but I think it highlights how AI has the potential to remove drudgery from our work: tasks that are easy to automate are, almost by definition, rote. I instead get to focus on the more fun parts. The fulfilling parts.

>By definition automation is about scaling and the higher up the chain you go the fewer people are needed to manage the bots; what do you do with the rest? (Do you believe the people who run the models for profit and benefit the most would volunteer to redistribute their wealth and enact some sort of post-scarcity communist-like equality?)

I think the best precedent here is the start of the 20th century. In this period, elites were absolutely entrenched against ideas like increasing worker pay, granting their workers more rights, or raising taxes. However, I believe one of the major turning points in this struggle worldwide was the revolution in Russia. Not because of the communist ideals it espoused, but because of the violence and chaos it caused. People, including economic elites, aren't Marxist-style unthinking bots: they could tell that if they didn't do something about the desperation and poverty they had created, they would be next. So due to a combination of self-interest and, yes, their own moral compasses, they made compromises with the radicals to improve the standard of living for the poor and common workers, who were mostly happy to accept those compromises.

Now, it's MUCH more complicated than I've laid out here. The shift away from the Gilded Age had been happening for nearly twenty years at that point. But I think it illustrates that concentrated economic power that doesn't trickle down is dangerous: elites who create constant social destruction with no reward for those bearing it will eventually destroy themselves. And they will be smart enough to realize this.

paul_e_warner commented on Nvidia’s $589B DeepSeek rout   finance.yahoo.com/news/as... · Posted by u/rcarmo
plaidfuji · a year ago
Here’s a take I haven’t seen yet:

If training and inference just got 40x more efficient, but OpenAI and co. still have the same compute resources, once they’ve baked in all the DeepSeek improvements, we’re about to find out very quickly whether 40x the compute delivers 40x the performance / output quality, or if output quality has ceased to be compute-bound.

paul_e_warner · a year ago
Yes, but I think most of the rout is caused by the fact that there really isn't anything protecting AI companies from being disrupted by a new player. LLMs are fairly simple technology compared to some of the other things tech companies build, which means OpenAI really doesn't have much ability to protect its market-leader status.

I don't really understand why the stock market has decided this affects nvidia's stock price though.

paul_e_warner commented on AI, but at What Cost? Breakdown of AI's Carbon Footprint   loopbreaker.substack.com/... · Posted by u/tnahrf
brtkdotse · a year ago
> more like “ends justify the means at any cost”

Everyone is crystal clear on this, the goal is to replace expensive humans to increase profits.

paul_e_warner · a year ago
...how do you think you got your job? You ever see those old movies with rows of people with calculators manually balancing spreadsheets with pen and paper? We are the automators. We replaced thousands of formerly good paying jobs with computers to increase profits, just like replacing horses with cars or blacksmiths with factories.

The reality of AI, if it succeeds in replacing programmers (and there's reason to be skeptical of that), is that it will simply be a "move up the value chain". Former programmers, instead of developing today's highly technical skills, will have new ones: either helping to make models that meet new goals, or guiding those models to produce things that meet requirements. It will not mean all programmers are automatically unemployable, but we will need to change.

paul_e_warner commented on The Oracle Java Platform Extension for Visual Studio Code   inside.java/2023/10/18/an... · Posted by u/pjmlp
_chu1 · 2 years ago
What's wrong with it? I don't program really (at least yet) and Java actually seems like a neat language to learn.
paul_e_warner · 2 years ago
There's nothing really wrong with it, but its core design is pretty outdated in ways that are difficult to fix without putting together a totally new language. A couple of examples:

- Support for nullable values. Swift and Kotlin have first-class support for these, which is meant to minimize the number of null pointer errors in your code. You can mostly fix this with annotations, but that requires your team to use them very consistently, and they are not supported by most Java libraries.

- Its approach to concurrency. Java's built-in synchronization primitive is based on monitors, an older model of concurrency in which every object is able to guard the internal consistency of its own state. No one really uses it like this anymore; most code instead creates a dedicated `Object lock` to use as a synchronization primitive.

- Serialization. Java has built-in binary serialization support that ended up being a massive security hole. Most people are now forced to use some form of JSON serialization instead, but the old serialization format is still lurking in the background to ensnare less knowledgeable programmers.

- Generics. Smarter people than me can probably give you more detail on this, but generics were grafted onto the language long after it was introduced, and it shows. Generic type information is erased at runtime, meaning it is technically possible to break the generic typing of an object.
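To make the generics point concrete, here is a minimal sketch (the class name is just for illustration) of how erasure lets a raw reference break a `List<String>` at runtime:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();

        // At runtime the list carries no type parameter, so a raw
        // reference lets us smuggle in a non-String element.
        List raw = strings;
        raw.add(42); // compiles with only an unchecked warning

        // The failure surfaces later, at the point of use, as a
        // ClassCastException from the compiler-inserted cast.
        try {
            String s = strings.get(0);
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("heap pollution detected");
        }
    }
}
```

The bad `add` succeeds precisely because the type argument no longer exists at runtime; by contrast, an array (`String[]`) would have thrown at the moment of insertion.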

None of these, it should be noted, are deal-breakers or reasons why you shouldn't use the language. Almost every single one has some form of workaround. But if you're not aware of them (or are stuck with an older legacy codebase like a lot of people are), they can be major headaches that can simply be avoided by using a more modern language.
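As a sketch of the nullable-values workaround mentioned above (the `findEmail` helper is hypothetical), `java.util.Optional` can make absence explicit in a return type, though unlike Kotlin's `String?` this is a library convention rather than something the compiler enforces:

```java
import java.util.Map;
import java.util.Optional;

public class NullableDemo {
    // Hypothetical lookup: "not found" is visible in the return type
    // instead of being an implicit null.
    static Optional<String> findEmail(Map<String, String> directory, String user) {
        return Optional.ofNullable(directory.get(user));
    }

    public static void main(String[] args) {
        Map<String, String> directory = Map.of("ada", "ada@example.com");

        // The caller is forced to decide what a missing value means.
        String email = findEmail(directory, "grace").orElse("<no email on file>");
        System.out.println(email); // prints "<no email on file>"

        // Present values can be consumed without a null check.
        findEmail(directory, "ada").ifPresent(System.out::println);
    }
}
```

The limitation the comment describes still applies: nothing stops a library from handing you a plain null instead of an `Optional`, which is why annotations plus team discipline are usually needed on top.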

paul_e_warner commented on Half a million lines of Go   blog.khanacademy.org/half... · Posted by u/nickcw
paul_e_warner · 5 years ago
I've written a bit of code in Go, and the problem I have with it is primarily that it feels really outdated for a "modern" language. Every time I use it, I feel like I'm dealing with something written by someone who really hated Java in 2005. There are features that could be added to the language that would make it more readable and less error-prone without compromising the simplicity of the core language. Generics are the famous example, but the one that really gets me is the lack of nullable type signatures. They're a great way to avoid an entire class of bugs, and nearly every modern language I've used has evolved a solution for them except Go.

Another issue I have is the reliance on reflection. In general, if you have to rely on reflection to do something, that usually means you're working around some inherent limitation of the language itself, and the resulting code is often far less readable than it would be in a more expressive language. Lots of Go libraries and frameworks are forced to use it because there's just no other way to express some really basic things.

I really want to like Go. There's a lot I like - the "only one way to do something" approach means that code always feels consistent. Errors as values is a far superior approach to exceptions. I had to write some for a job interview project a while back and it felt really refreshing, but every time I try to use it for a personal project, I don't feel like I'm getting anything out of it that I couldn't get out of say, Rust, or modern, typed Python.

u/paul_e_warner

Karma: 65 · Cake day: November 25, 2020