Readit News
tricorn commented on No right to relicense this project   github.com/chardet/charde... · Posted by u/robin_reala
skeledrew · 8 days ago
The human is still at best a co-author, as the primary implementation effort isn't theirs. And I think the effort involved is the key contention in these cases. Yesterday ideas were cheap and it was the execution that mattered; today execution is probably cheaper than ideas, but the same principle should still hold.
tricorn · 6 days ago
No, effort is explicitly not a factor in copyright. It was at one point, but the "sweat of the brow" doctrine went away with Feist Publications v. Rural Telephone in 1991, at least in the US.
tricorn commented on No right to relicense this project   github.com/chardet/charde... · Posted by u/robin_reala
tricorn · 6 days ago
Copyright protects "authorship", not functionality. Patent protects functional elements.

A rewrite based on functional equivalence is not infringing on the copyright as long as no creative expression was copied. That was the heart of Google v. Oracle: whether the API itself was creative expression or functionality.

There are many aspects to what can be considered creative expression, including names, organization, and other non-functional elements. An algorithm would not be protected expression. If an AI can write it without reference to the original source code, using only documented behavior, then it would not be infringing (though proving that it didn't copy anything from its training data might be tough). It also would not itself be copyrightable, except for elements that could be traced back as "authorship" to the humans who worked with the AI.

If LLMs can create GOOD software based only on functionality, not by copying expression, then they could reproduce every piece of GPL software and release it as Public Domain (which it would have to be if no human has any authorship in it). By the same principle that the GPL software wasn't infringing on the programs they copied functionality from, neither would the AI software. That's a big IF at this point, though, the part about producing GOOD software without copying.

tricorn commented on The Eternal Promise: A History of Attempts to Eliminate Programmers   ivanturkovic.com/2026/01/... · Posted by u/dinvlad
tricorn · 10 days ago
I think your example highlights one of the places where even the current level of AI can be helpful and enabling, rather than a competitor for jobs, which is helping a person learn something new. Not always in all subjects (do NOT learn to fly a plane solely by AI, I say this as a flight instructor), and the person has to be careful to verify accuracy, but still it can be amazingly useful, and endlessly patient.
tricorn commented on What are the best coping mechanisms for AI Fatalism?    · Posted by u/johnb95
tricorn · 14 days ago
Serious answer to the base question: learn as much as you can about how it all works, learn how to use it in its current state, keep up as it changes, be prepared for sudden leaps in the technology. Do not underestimate it.

Yes, there is a lot of hype, wailing, gnashing of teeth, but if it is good enough to be a worry, it is also good enough to empower the individual to survive it.

Ultimately, if it is all hype, it will soon crumble; if it is not, productivity will increase by leaps and bounds. The key issue is making sure that all the gains aren't taken by a small group of people (whether the current rich and powerful, or those who displace them using new paradigms).

I suggest getting comfortable with the idea of a UBI.

tricorn commented on Nanolang: A tiny experimental language designed to be targeted by coding LLMs   github.com/jordanhubbard/... · Posted by u/Scramblejams
tricorn · 2 months ago
Use Forth to create Lisp, implement Tcl in Lisp, then create Smalltalk using Tcl, then build Forth in Smalltalk. But wait, I'm just getting started!
tricorn commented on Show HN: GlyphLang – An AI-first programming language    · Posted by u/goose0004
tricorn · 2 months ago
Don't optimize the language to fit the tokens; optimize the tokens to fit the language. Tokenization is just a means of compressing the text: use a large corpus of code in the target languages to drive the tokenizer, then do the training using those tokens. More important is to have a language where the model can make valid predictions of what effective code will look like. Models are "good" at Python because they see so much of it. To determine which language might be most appropriate for an AI to work with, you'd need to train multiple models, each with a tokenizer optimized for one language and training specifically targeting that language.

One language I've had good success with, despite models having low levels of training in it, is Tcl/Tk. The language is essentially a wordy version of Lisp (despite Stallman's disdain for it): it is extremely introspective, with the ability to modify the language from within itself. It also has a very robust, extensible sandbox, and is reasonably efficient for an interpreted language.

I've written a scaffold that uses Tcl as the sole tool-calling mechanism, and despite a lot of training pulling the model toward Python and JSON, it does fairly well. Unfortunately I'm limited in the models I can use because I have an old 8GB GPU, but I was surprised at how well it manages the Tcl sandbox with just a relatively small system prompt. Tcl is a very regular language with a very predictable structure, and seems ideal for an LLM to use for tool calling, note taking, context trimming, delegation, etc. I haven't been working on this long, but it is close to the point where the model will be able to start extending its own environment (with anything potentially dangerous requiring human intervention).
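A minimal sketch of the safe-interp pattern described above, using Python's built-in Tcl bridge (tkinter) so it runs without a separate tclsh. The `read_note` tool and its note store are hypothetical illustrations, not the commenter's actual scaffold:

```python
from tkinter import Tcl

tcl = Tcl()  # a full (master) Tcl interpreter, no GUI needed
tcl.eval('interp create -safe worker')  # sandbox: file/exec/socket commands hidden

# A master-side "tool" the sandbox is allowed to call (hypothetical example).
NOTES = {"todo": "refactor the scaffold"}

def read_note(name):
    return NOTES.get(name, "")

tcl.createcommand('read_note', read_note)
# Alias only this one command into the safe interp; everything else stays blocked.
tcl.eval('interp alias worker read_note {} read_note')

print(tcl.eval('worker eval {read_note todo}'))            # allowed tool call works
print(tcl.eval('catch {worker eval {open /etc/passwd}}'))  # 1: `open` is hidden
```

Anything the model emits runs inside `worker`, so only explicitly aliased commands can touch the host; that is what makes Tcl's sandbox attractive as a sole tool-calling channel.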
tricorn commented on Claude Memory   anthropic.com/news/memory... · Posted by u/doppp
awesome_dude · 5 months ago
Note to everyone - sharing what works leads to complete morons telling you their interpretation... which has no relevance.

Apparently they know better even though

1. They didn't issue the prompt, so they... knew what I was meaning by the phrase (obviously they don't)

2. The LLM/AI took my prompt and interpreted it exactly how I meant it, and behaved exactly how I desired.

3. They then claim that it's about "knowing exactly what's going on" ... even though they didn't and they got it wrong.

This is the advantage of an LLM: if it gets it wrong, you can tell it. It might persist with an erroneous assumption, but you can tell it to start over (I proved that).

These "humans" however are convinced that only they can be right, despite overwhelming evidence of their stupidity (and that's why they're only JUNIORS in their fields)

tricorn · 5 months ago
There are problems with either approach, because an LLM is not really thinking.

Always starting over and trying to get everything into one single prompt can be much more work, with no better results than iteratively building up a context (which could probably be shown to sometimes produce a "better" result than could have been achieved otherwise).

Just telling it to "forget everything, let's start over" will have significantly different results than actually starting over. Whether that is sufficient, or even better than the alternatives, depends entirely on the problem and the context it is supposed to "forget". If your response had been "try just telling it to start over, it might work and be a lot easier than actually starting over", you might have gotten a better reception. Calling everyone morons when your response indicates a degree of misunderstanding of how an LLM operates is not helpful.

tricorn commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
polotics · 8 months ago
Can you translate "SFT'd" and "TTFT" and "TESCREAL" for the less clued-in members of the audience? On "one ine itabalism" I just gave up.
tricorn · 8 months ago
I just selected some of the text and my browser told me what they meant along with some background and some links for more information. The "one ine itabilism" actually found this conversation as a reference ...
tricorn commented on Zuckerberg approved training Llama on LibGen [pdf]   storage.courtlistener.com... · Posted by u/stefan_
wesapien · a year ago
Isn't UBI just going to raise inflation? People who don't need it will claim it and use the existing tax loopholes. Tax laws will need to be rewritten.
tricorn · a year ago
No, you can do a UBI that keeps the money supply the same, and use it as a way to stabilize the economy. Take a $2000/mo UBI, a 50% flat tax on other income, and a 25% VAT, and phase it in: 10% of the new system the first year (alongside 90% of your current taxes and 90% of current support payments), 20% and 80% the second year, and so on, so the impact isn't too disruptive. Adjust the flat tax rate as the Federal budget changes (a spending bill is automatically a tax bill as well). Adjust the VAT to control inflation.
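The phase-in schedule described above can be sketched as follows; the percentages are the comment's hypothetical numbers, not a worked-out proposal:

```python
# Ten-year blend of the new system (UBI / flat tax / VAT) against
# legacy taxes and support payments, per the comment's schedule.
def phase_in(year):
    """Return (new-system share, legacy share) for a given year."""
    new_share = min(year * 0.10, 1.0)  # 10% in year 1, 20% in year 2, ...
    return new_share, 1.0 - new_share

for y in (1, 2, 5, 10):
    new, old = phase_in(y)
    print(f"year {y:>2}: {new:.0%} new system, {old:.0%} legacy")
```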
tricorn commented on Zuckerberg approved training Llama on LibGen [pdf]   storage.courtlistener.com... · Posted by u/stefan_
wesapien · a year ago
UBI has less friction as far as implementation, since we don't need to qualify anyone. With AI, we can afford to have that extra step (nuance) and be able to make sure it's a needs-based approach. The future requires various combinations of changes. Fix the tax system and then UBI (in this specific order) OR !UBI (needs-based distribution).
tricorn · a year ago
Implement UBI as part of fixing taxes. A UBI combined with a flat tax plus a national sales tax, and including universal healthcare, can continue to be a progressive tax while eliminating a lot of the overhead of keeping track of it all. Look at the effective tax rates with a 50% flat tax, 25% sales tax, and $2000 per month UBI with UHC.
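To make "look at the effective tax rates" concrete, a small sketch using the comment's hypothetical figures (income tax only; the 25% sales tax and the value of UHC are omitted for simplicity):

```python
# Effective income-tax rate with a 50% flat tax and the $2000/mo UBI
# treated as a refundable credit. Negative rate = net recipient.
UBI_ANNUAL = 2000 * 12  # $24,000
FLAT = 0.50

def effective_rate(income):
    net_tax = FLAT * income - UBI_ANNUAL
    return net_tax / income

for inc in (30_000, 60_000, 120_000, 1_000_000):
    print(f"${inc:>9,}: {effective_rate(inc):+.1%}")
```

The rate climbs smoothly from negative at low incomes toward the 50% flat rate at high incomes, which is why a flat tax plus UBI still works out to be progressive.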

u/tricorn
