It's starting off as a macOS app because that's the machine I have. I didn't know Swift or SwiftUI when I started. I now know them somewhat, but the entire app has been vibe-coded. This has made it slow going. Very "1 step forward, 2 steps back" until I switched from Claude Code to Codex and GPT-5.
I'm hoping to start an initial beta within the family in the next week or two, and then a wider round in January.
The main consensus is that people who illegally access content wouldn't have bought it otherwise, and that they still advertise it to others (thus still driving up sales).
These studies have then been systematically strong-armed into silence by the EU and its member countries' anti-piracy organizations.
This is probably because the war on piracy, too, is a billion-dollar industry. I'd be glad to blow it all up and give it all to the starving artists and their families.
That's why I can accept copyright even though it's not perfect.
Would revenue / person-hour show a different trend? Because there are a lot of part-time and contract workers out there.
Guanfacine is an alternative, and its mechanism of action also makes it anxiety-reducing.
I tried the major ones (Adderall, Ritalin, Vyvanse, Concerta, etc.). They all made dealing with ADHD significantly easier, but even at the lowest doses they turned me into an extremely anxious and irritable person. I had never experienced anything close to a panic attack or nervous breakdown in my 30+ years of being alive until I started taking stimulant medication.
I decided that living with untreated ADHD was the better alternative, so now I'm back to copious amounts of coffee to deal.
How do you take 5-10ug? Dissolve 10mg in a litre of something. Get a 1ml dosing syringe. It has 0.1ml markings.
You could start there and increase it until you find what works. Also, if you take very little you can have a break on weekends and not suffer too much while remaining sensitive to lower dosages.
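To spell out the arithmetic: 10 mg dissolved in 1 litre works out to 10 ug per ml, so 0.5 ml of the solution is 5 ug and a full 1 ml is 10 ug, and the 0.1 ml markings let you adjust in 1 ug steps.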
So then, for them to determine fair use, they need the Department of Justice involved to say the access was illegal? Since when? Just to highlight the absurdity: “illegal” here meaning a terms-of-service violation, despite the fact that everyone using the service can consume copyrighted works. This circles back to the now-paradoxical issue that consuming isn't copyright infringement, yet it would require the Copyright Office to police terms of service, which is impossible.
This is too paradoxical to even entertain, but that's why the office led with “current law”: it is completely unaccommodating to a real social problem. A lot of artists and people are uncomfortable with current law and with generative AI. New law could patch this, except:
Artists don't actually like the generative AI that isn't trained on copyrighted works either.
The laws are going to change too slowly, and there are already models that meet the high bar detractors started with: training only on new works that were specifically licensed for AI training, with the creators compensated.
The outcome is still the same. More people can express themselves. People with years of discipline are no longer needed.
By the time any law could actually hold noncompliant models to this new imagined standard, compliant models will already have made the same trade obsolete.
All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except as permitted by U.S. copyright law.
“Reproduced” and “electronic” are the relevant terms here.
I remember when GPT-3 came out and you could get it to spit out chunks of Harry Potter, and I wondered why no one was being sued.
The models are built on copyright infringement. Authors and publishers of any kind should be able to opt out of having their work included in training data, and ideally opt-in should be the default.
And I hope one day someone trains a model without any works of fiction and we find a qualitative difference in its performance. Does a coding model really need to encode the customs, mores, and concerns of Victorian-era fictional characters to write a Python function?
so i gave roo code a try, set a few test cases, and proceeded to declutter, refactor, and rewrite the whole thing. i've never really written long apps in javascript, nor typescript for that matter, and man, i just think 3k lines of code in a single file is bad code, and i've been proven right: 3k lines fucks your context really good. you can't use cline to code cline, because it will ruin you financially one way or another. jesus fuckin' christ, the old cline.ts file was responsible for the whole damn extension, over 3k lines, the kind of code i would have written 10 years ago as an intern.

anyway, i've added (and learned in the process) react.js components to build an interface for easily collecting the data for my own loras. honestly, if you are looking to integrate large local models into kilo, i'd love to collaborate. my forks mostly provide data analysis for the fine-tuning of my own personal repositories, using years of commit history as training data, even bash history. i've benchmarked several tasks: i can basically fork roo code or cline, declutter it, and refactor it with a gemma or qwq running on a mac studio for a few watts.

i've been logging everything i do ever since we were granted api access to gpt-3 at a lab i coordinated about 5 years ago, so i've mastered filtering the completions api and reconstructing the streams, all with airflow and python scripts. i added a couple of buttons, like the download-task one you've also added, but more along the lines of "send this to the batch in the datacenter so we train a new gemma". filtering good solutions from not-so-good ones, the old thumbs-up/thumbs-down situation, helps a lot. i'm also adding a couple of mcp integrations for applying quick loras locally, plus test-driven development, aiming for reinforcement-learning-based loras.

i built myself a very nice toy. or should i say, i bootstrapped a very nice tool that creates itself? anyway, thanks for sharing this.
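for anyone wanting to do the same, the stream reconstruction part is roughly this shape. a minimal sketch, not my actual airflow pipeline: it assumes you logged the raw sse lines as they came off the wire ("data: {...}" per chunk, terminated by "data: [DONE]"), which is the standard openai streaming format; the function and file names here are made up, and the airflow side is left out.

    import json

    def reconstruct_stream(sse_lines):
        """reassemble one logged streaming response into the full completion text.

        expects raw sse lines as logged: 'data: {json chunk}' per line,
        ending with 'data: [DONE]'. handles both the legacy completions
        format (choices[0]['text']) and the chat format
        (choices[0]['delta']['content']).
        """
        parts = []
        for line in sse_lines:
            line = line.strip()
            if not line.startswith("data: "):
                continue  # skip blanks and sse keep-alive comments
            payload = line[len("data: "):]
            if payload == "[DONE]":
                break
            choices = json.loads(payload).get("choices") or []
            if not choices:
                continue  # e.g. a final usage-only chunk
            choice = choices[0]
            text = choice.get("text") or choice.get("delta", {}).get("content")
            if text:
                parts.append(text)
        return "".join(parts)

    # usage, assuming one logged response per file (hypothetical name):
    # with open("response_0001.log") as f:
    #     print(reconstruct_stream(f))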
i think the next major thing that's going to happen with these tools is that they become free at home as new chips get cheaper. llama 4 running on mac studios or dgx stations is as fast as you can get today, and it is already good enough (if prepared correctly) to build any yc startup codebase from before covid, or even from before chatgpt, in a weekend. it will definitely happen. i'm wrapping up fixing llama 4 scout; allow me to mention that it has a tendency to fix bugs by commenting out code and adding TODOs. fucking great architecture though, just what we needed, i mean for optimal local development. i'll try to publish results soon enough, optimized for the top mac studio though, haven't got a dgx yet. i'll prepare macbook versions too. the world needs more of this, a cline that fixes itself on battery power alone...
most of the sites of this type i found annoying because you can't just use a midi keyboard, so you just get RSI from clicking around for 10 minutes.
I tried getting AdSense on it, but they seem to have vague content requirements. Apparently tools don't count as real websites :-(. I couldn't even fool it with fake content. What's the best banner ad company to use in this situation?