WXLCKNO · 6 months ago
I'm working on a bunch of different projects trying out new stuff all the time for the past six months.

Every time I do something I add another layer of AI automation/enhancement to my personal dev setup with the goal of trying to see how much I can extend my own ability to produce while delivering high quality projects.

I definitely wouldn't say I'm 10x of what I could do before across the board but a solid 2-3x average.

In some respects like testing, it's perhaps 10x because having proper test coverage is essential to being able to let agentic AI run by itself in a git worktree without fearing that it will fuck everything up.

I do dream of a scenario where I could have a company that's equivalent to 100 or 1000 people with just a small team of close friends and trusted coworkers that are all using this kind of tooling.

I think the feeling of small companies is just better and more intimate and suits me more than expanding and growing by hiring.

eloisant · 6 months ago
That's not really new. In the 2000s, small teams using web frameworks like Rails were able to do as a team of 5 what needed a 50-person team in the '90s. Or even as a weekend solo project.

What happened is that it became the new norm, and the window where you could charge the work of 50 people for a team of 5 was short. Some teams cut their prices to gain market share and we were back to the usual revenue per employee. At some point nobody thought of a CRUD app with a web UI as a big project.

It's probably what will happen here (if AI does give the same productivity boost as languages with memory management and web frameworks): soon your company with a small team of friends will not be seen by anyone as equivalent to 100 or 1000 people, even if you can achieve the same thing as a company of that size could a few years earlier.

MoonGhost · 6 months ago
That's what Amazon is doing. They simply raise the output norm and promise mass layoffs again. MS is making similar promises; I'm not sure about the details, but they likely aren't cutting projects. Which means use of some sort of copilot is now expected.

The question is what happens to developers. Will they quit the industry or move to smaller companies?

ujkhsjkdhf234 · 6 months ago
Instagram was 13 employees before they were purchased by Facebook. The secret is most employees in a 1000 person company don't need to be there or cover very niche cases that your company likely wouldn't have.
WJW · 6 months ago
Don't fall for the lottery winner bias. Some companies just strike it rich, often for reasons entirely outside their control. That doesn't mean that copying their methods will lead to the same results.
sien · 6 months ago
YouTube had fewer than 70 employees when Google bought them in 2006.

With a good idea and good execution teams can be impressively small.

edanm · 6 months ago
> The secret is most employees in a 1000 person company don't need to be there or cover very niche cases that your company likely wouldn't have.

That is massively wrong, and frankly an insulting worldview that a lot of people on HN seem to have.

The secret is that some companies - usually ones focused on a single highly scalable technology product, and that don't need a large sales team for whatever reason - can be small.

The majority of companies are more technically complex, and often a 1,000 person company includes many, many people doing marketing, sales, integrations with clients, etc.

charliebwrites · 6 months ago
> Every time I do something I add another layer of AI automation/enhancement to my personal dev setup with the goal of trying to see how much I can extend my own ability to produce while delivering high quality projects

Can you give some examples? What’s worked well?

haiku2077 · 6 months ago
- Extremely strict linting and formatting rules for every language you use in a project. Including JSON, YAML, SQL.

- Using AI code gen to make your own dev tools to automate tasks. Everything from "I need a make target to automate updating my staging and production config files when I make certain types of changes" or "make an ETL to clean up this dirty database" to "make a codegen tool to automatically generate library functions from the types I have defined" and "generate a polished CLI for this API for me"

- Using Tilt (tilt.dev) to automatically rebuild and live-reload software on a running Kubernetes cluster within seconds. Essentially, deploy-on-save.

- Much more expansive and robust integration test suites, with output such that an AI agent can automatically run integration tests, read the errors and use them to iterate. With some guidance it can write more tests based on a small set of examples. It's also been great at adding formatted messages to every test assertion to make failed tests easier to understand.

- Using an editor where an AI agent has access to the language server, linter, etc. via diagnostics to automatically understand when it makes severe mistakes and fix them

A lot of this is traditional programming but sped up so that things that took hours a few years ago now take literally minutes.
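To make the assertion-formatting point above concrete, here is a minimal sketch; the `create_user` stub and `check` helper are hypothetical stand-ins, not the poster's actual code. The idea is that every failure message carries a label plus expected/actual values, so an agent (or a human) can read the output and iterate without re-running the debugger.

```python
# Hypothetical example: an integration-test assertion helper whose failure
# output is structured so an AI agent can parse it and iterate.
# create_user() is a stand-in stub for the system under test.

def create_user(payload):
    """Stub for the service under test; a real suite would call the API."""
    if "email" not in payload:
        return {"status": 400, "error": "missing email"}
    return {"status": 201, "id": 1}

def check(label, expected, actual):
    """Assert with a formatted message: what was checked, expected vs. got."""
    assert expected == actual, f"[{label}] expected={expected!r} got={actual!r}"

def test_create_user_requires_email():
    resp = create_user({"name": "Ada"})
    check("status code for missing email", 400, resp["status"])
    check("error message", "missing email", resp["error"])
```

On failure this prints something like `[status code for missing email] expected=400 got=201`, which is trivially machine-readable.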

jprokay13 · 6 months ago
If you haven’t, adding in strict(er) linting rules is an easy win. Enforcing documentation for public methods is a great one imo.

The more you can do to tell the AI what you want via a “code-lint-test” loop, the better the results.

malux85 · 6 months ago
For us it’s been auto-generating tests - we focus efforts on having the LLM write one test and manually verifying it. Then we use this as context and tell the LLM to extend it to all space groups and crystal systems.

So we get code coverage without all the effort; it works well for well-defined problems that can be verified with tests.
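The "verify one test, then extend" pattern described above can be sketched with the standard library; the seven crystal systems are real, but `classify()` is a hypothetical stand-in for the poster's domain logic (pytest's `parametrize` would be the more common tool for this).

```python
# Sketch: one manually verified seed test, then the same check extended
# over the whole input space. classify() is a hypothetical stand-in.
import unittest

CRYSTAL_SYSTEMS = [
    "triclinic", "monoclinic", "orthorhombic", "tetragonal",
    "trigonal", "hexagonal", "cubic",
]

def classify(system: str) -> bool:
    """Stand-in for the real function; accepts any known crystal system."""
    return system in CRYSTAL_SYSTEMS

class TestClassify(unittest.TestCase):
    def test_cubic(self):
        # The single, manually verified seed test.
        self.assertTrue(classify("cubic"))

    def test_all_systems(self):
        # The LLM-extended version: one sub-test per crystal system.
        for system in CRYSTAL_SYSTEMS:
            with self.subTest(system=system):
                self.assertTrue(classify(system), f"rejected {system!r}")
```

Each `subTest` reports its own failure, so a single bad case names itself instead of hiding behind the first assertion.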

vachina · 6 months ago
At some point you'll lose that edge because you stop being able to differentiate yourself. If you can x10 with agents, others can too. AI will let you reach the "higher" low-hanging fruit.
owebmaster · 6 months ago
Same thing as before, some people will x100 with agents while most will at maximum x10.
mensetmanusman · 6 months ago
Using agents is the new skill. AI will always be chasing a tail where some are 10-100x more efficient at using the toolchain than others.
ChrisMarshallNY · 6 months ago
A while back, someone here linked to this story[0].

It's a bit simplified and idealized, but is actually fairly spot-on.

I have been using AI every day. Just today, I used ChatGPT to translate an app string into 5 languages.

[0] https://www.oneusefulthing.org/p/superhuman-what-can-ai-do-i...

stephen_g · 6 months ago
Hopefully it’s better for individual strings, but I’ve heard a few native speakers of other languages (who also can speak English) complaining about websites now serving up AI-translated versions of articles by default. They are better than Google Translate of old, but apparently still bad enough that they’d much rather just be served the English original…

It's similar to my experience with the AI voice translation YouTube has - I'd rather listen to the original voice with translated subtitles than a fake voice.

homebrewer · 6 months ago
Weblate has been doing that for any number of languages (up to 200 or however many it supports) for many years, using many different sources, including public translation memory reviewed by humans.

It can be plugged into your code forge and fully automated — you push the raw strings and get a PR with every new/modified string translated into every other language supported by your application.

I use its auto-translation feature to prepare quick and dirty translations into five languages, which lets you test right away and saves time for professional translators later — as they have told me.

If anyone is reading this, save yourself the time on AI bullshit and use Weblate — it's a FOSS project.

rs186 · 6 months ago
If someone reports a translation error, how do you verify and fix it? Especially those tricky ones that have no direct translation and require deep understanding of the language?
SwtCyber · 6 months ago
The "small but mighty" team model feels way more appealing to me too - less management overhead, more actual building
teaearlgraycold · 6 months ago
Definitely agree small teams are the way to go. The bigger the company the more cognitive dissonance is imposed on the employees. I need to work where everyone is forced to engage with reality and those that don’t are fired.
MoonGhost · 6 months ago
The thing is, luck is usually on the side of the bigger battalions. Smaller teams don't have the reach and breadth of bigger companies. All in all we need the full spectrum, from single-person startup to mega-corporation.


spacemadness · 6 months ago
I think we’re going to have to deal with stories of shareholders wetting themselves over more layoffs more than we’re going to see higher-quality software produced. Everyone is claiming huge productivity gains, but software quality and new products being created seem at best unchanged. Where is all this new amazing software? It’s time to stop all the talk and show something. I don’t care that your SQL query was handled for you; that’s not the bigger picture, that’s just talk.
delusional · 6 months ago
This has been an industry-wide problem in Silicon Valley for years now. For all their talk of changing the world, what we've gotten in the last decade has been taxi and hotel apps. Nothing truly revolutionary.
csa · 6 months ago
> what we've gotten in the last decade has been taxi and hotel apps. Nothing truly revolutionary.

I’m not sure where you are from, but this is not my perspective from Northern California.

1. Apps in general, and Uber in particular, have very much revolutionized the part-time work landscape via gig work. There are plenty of criticisms of gig work if/when people try to do it full time, but as a replacement for part-time work, it’s incredible. I always try to strike up a conversation with my Uber drivers about what they like about driving, and I have gotten quite a few “make my own schedule” and “earn/save for special things” answers (e.g., vacations, hobby items, etc.). Many young people I know love the flexibility of the gig apps for part-time work, as the pay is essentially market rate or better for their skill set, and they get to set their own schedule.

2. AirBnB has revolutionized housing. It’s easier for folks to realize the middle-class dream of buying a house and renting it out fractionally (by the room). I’ve met several people who have spun up a few of these. Relatedly, mid-term rentals (e.g., weeks or months rather than days or years) are much easier to arrange now than they were 20 years ago. AirBnBs have also created some market efficiency by pricing properties competitively. Note that I think many of these changes are actually bad (e.g., it’s tougher to buy a house where I am), but it’s revolutionary nonetheless.

const_cast · 6 months ago
The best part is those two things have only gotten worse over time. Turns out, they were never really that good an idea; they just had money to burn and legislative holes to exploit. Now Uber is more expensive than taxis ever were, and AirBnB is virtually useless now that it has to play the same legal ballgame as hotels. Oh, and that one is more expensive too.

Tech companies forget that software is easy, the real world is hard. Computers are very isolated and perfect environments. But building real stuff, in meatspace, has more variables than anyone can even conceptualize.

Towaway69 · 6 months ago
An internet full of pointless advertising and the invention of adblocks to hide that advertising.

Digital devices that track everything you do, generating so much data that the advertising actually got worse, even though the data was collected with the promise that the adverts would become more appropriate.

Now comes AI to make sense of the data, while the training data (i.e., the internet) is being swamped with AI content, so the training data for AIs is becoming useless.

I wonder what is being invented to remove all the AI content from the training data.

john2x · 6 months ago
The revolution is happening at the top.
NitroPython · 6 months ago
This really resonates with me, I want to see the bigger picture as well.
SwtCyber · 6 months ago
It's one thing to speed up little tasks, another to ship something truly innovative
exclipy · 6 months ago
I do see the AI agent companies shipping like crazy. Cursor, Windsurf, Claude Code... they are adding features as if they have some magical workforce of tireless AI minions building them. Maybe they do!
NitroPython · 6 months ago
A lot of what Cursor, Windsurf, etc. do kinda just feels like the next logical step after the invention of LLMs, but it doesn't feel like the greater system of software has changed all that much, except for the pure volume one individual can produce now.
neom · 6 months ago
One area of business that I'm struggling with is how boring it is talking to an LLM. I enjoy standing at a whiteboard thinking through ideas, but more and more I see a push for "talk to the LLM, ask the LLM, the LLM will know". The LLM will know, but I'd rather talk to a human about it. Also, in pure business, it takes me too long to unlock nuances that an experienced human just knows; I have to do a lot of "yeah but" work, way more than I would with an experienced human. I like LLMs and I push for their use, but I'm starting to find something here and I can't put my finger on what it is. I guess they're not wide enough to capture deep nuances? As a result, they seem pretty bad at understanding how a human will react to their ideas in practice.
andy99 · 6 months ago
It's not quite the same, but since the dawn of smartphones I've hated it when you ask a question, as a discussion starter or to get people's views, and some jerk reads off the Wikipedia answer as if it's some insight I didn't know was available to me, and basically ruins the discussion.

I know talking to an llm is not exactly parallel, but it's a similar idea, it's like talking to the guy with wikipedia instead of batting back and forth ideas and actually thinking about stuff.

james_marks · 6 months ago
This has peaked in my circles, thankfully. Now it’s considered a bit of a faux pas to look up an answer during a discussion, for exactly this reason.
flowerthoughts · 6 months ago
I recently had a related issue where I was explaining an idea I'm working on, and one of my mates was engaging in creative thinking. The other found something he could do: look up a Chinese part to buy. He spent quite a few minutes on his phone, then exclaimed "The hardware is done!" The problem is, what he found was incomplete and wrong.

So he missed out on the thing we should do when being together: talk and brainstorm, and he didn't help with anything meaningful, because he didn't grasp the requirements.

seabombs · 6 months ago
Some of my colleagues will copy/paste several paragraphs of LLM output into ongoing slack discussions. Totally interrupts the flow of ideas. Shits me to tears.


sheepscreek · 6 months ago
I know what you mean. Also, the more niche your topic the more outright wrong LLMs tend to be. But for white-boarding or brainstorming - they can actually be pretty good. Just make sure you’re talking to a “large” model - avoid the minis and even “Flash” models like the plague. They’ve only ever disappointed me.

Adding another bit - the multi-modality brings them a step closer to us. Go ahead and use the physical whiteboard, then take a picture of it.

Probably just a matter of time before someone hooks up Excalidraw/Miro/Freeform into an LLM (MCPs FTW).

avoutos · 6 months ago
My experience has been similar. I can't escape the feeling that these LLMs are weighted down by their training data. Everything seems generically intelligent at best.
handfuloflight · 6 months ago
With the LLM, you're free to ask any question without worrying about what the other party might think of you for asking that question.
PixyMisa · 6 months ago
Whereas with humans, you'll get valuable pushback for ideas that have already failed.
potamic · 6 months ago
The LLM will know. One day they will form a collective intelligence and all the LLMs will know and use it against you. At least with a human, you can avoid that one person...
andrew_lettuce · 6 months ago
Which is the hallmark of a great teammate, but then we won't need them anymore.
SebastianKra · 6 months ago
Thinking things through is desirable. But in many discussions both sides basically "vibe-out" what they think the objective truth is. If it's a fact that can be looked up, just get your phone and don't stall the discussion with guessing games.
ricw · 6 months ago
Just do both? You need an adequate network for that, though, which new-school AI vibe entrepreneurs might lack…
neom · 6 months ago
Both indeed. I'm older, I do consulting, often for the new-school AI CEOs, and they keep thinking I'm nuts for saying we should bring in this person to talk to about this thing... I've tried to explain to a few folks now that a human would be much better in this loop, but I have no good way to prove it, as it's just experience.

I've noticed across the board, they also spend A LOT of time getting all the data into LLMs so they can talk to them instead of just reading reports, like bro, you don't understand churn fundamentally, why are you looking at these numbers??

nemothekid · 6 months ago
I'm not entirely convinced this trend is because AI is letting people "manage fleets of agents".

I do think the trend of the tiny team is growing, though, and I think the real driver was the layoffs and downsizings of 2023. People were skeptical whether Twitter would survive Elon's massive staff cuts, and technically the site has survived.

I think the era of 2016-2020 empire building is coming to an end. Valuing a manager on their number of reports is now out of fashion, and there's no longer any reason to inflate team sizes.

simonw · 6 months ago
I think the productivity improvement you can get just from having a decent LLM available to answer technical questions is significant enough already even without the whole Agent-based tool-in-a-loop thing.

This morning I used Claude 4 Sonnet to figure out how to build, package and ship a Docker container to GitHub Container Registry in 25 minutes start to finish. Without Claude's help I would expect that to take me a couple of hours at least... and there's a decent chance I would have got stuck on some minor point and given up in frustration.

Transcript: https://claude.ai/share/5f0e6547-a3e9-4252-98d0-56f3141c3694 - write-up: https://til.simonwillison.net/github/container-registry

nemothekid · 6 months ago
I'm not denying LLMs are useful. I believe the trend was going to happen regardless of how useful LLMs are.

AI ended up being a convenient excuse for big tech to justify their layoffs, but Twitter had already painted a story about how bloated some organizations were. Now that there is no longer any status in having 9,001 reports, the pendulum has swung the other way: it's now sexy to brag about how few people you employ.

homebrewer · 6 months ago
Their boilerplate works out of the box, you don't need to change anything. I recently packaged, signed, and published an OCI container into ghcr for the first time, it took about 5 to 10 minutes without touching any LLMs thanks to the quality of their documentation.
jordanb · 6 months ago
Eh, I felt that way about the internet in the 2010s. It seemed like virtually any question could be answered by a Google query. People joked that a programmer's job mostly consisted of looking things up on Stack Overflow. But then Google started sucking and SO turned into another expertsexchange (which was itself good in the 2000s).

So far from what I've experienced AI coding agents automate away the looking things up on SO part (mostly by violating OSS licenses on Github). But that part is only bad because the existing tools for doing that were intentionally enshitified.

data-ottawa · 6 months ago
Conceptually I find LLMs/AI broaden my skillset but slow down any processes that are deep in specific knowledge and context.

It is really nice to have that, it raises the floor on the skills I'm not good at.

TZubiri · 6 months ago
"and technically the site has survived."

Only if you squint. If you look at the quality of the site, it has suffered tremendously.

The biggest "fuck you" is phishers buying blue checkmarks and putting the face of the CEO and owner on shill scams. But you also have extremely trash content and clickbait consistently getting (probably botted) likes and appearing at the top of feeds. You open a political thread and somehow there's a reply of a bear riding a bicycle as the top response.

Twitter is dead, just waiting for someone to call it.

data-ottawa · 6 months ago
Those are almost all management decisions, I expected a lot more crashes and security issues and a general inability to ship without taking things down.
relativ575 · 6 months ago
Huh? Look at the hottest topic at the moment:

https://www.twz.com/news-features/u-s-has-attacked-irans-nuc...

and see for yourself if Twitter is dead.

gedy · 6 months ago
> Valuing a manager on their number of reports is now out of fashion

I highly doubt human nature has changed enough to say that. It's just a down market.

SwtCyber · 6 months ago
The whole "empire building" mindset definitely feels outdated now - nobody's impressed by how many direct reports you have anymore
heraldgeezer · 6 months ago
So I can't hide in the masses watching Netflix anymore?
jayd16 · 6 months ago
Yeah but I think it's more that the money isn't there to throw bodies at an ok idea and hope you can turn revenue into profit down the line.

...unless you're shoveling AI itself, I guess.

apical_dendrite · 6 months ago
When I worked at a startup that tried to maximize revenue per employee, it was an absolute disaster for the customer. There was zero investment in quality - no dedicated QA and everyone was way too busy to worry about quality until something became a crisis. Code reviews were actively discouraged because it took people off of their assigned work to review other people's work. Automated testing and tooling were minimal. If you go to the company's subreddit, you'll see daily posts of major problems and people threatening class-action lawsuits. There were major privacy and security issues that were just ignored.
raincole · 6 months ago
So did revenue per employee increase?
golergka · 6 months ago
Really depends on the type of business you're in. In the startup I work in, I worked almost entirely on quality of service for the last year, rarely ever on the new features — because users want to pay for reliability. If there's no investment in quality, then either the business is making a stupid decision and will pay for it, or users don't really care about it as much as you think.
ldjkfkdsjnv · 6 months ago
There's two types of software: the ones no one uses, and the ones people complain about.
apical_dendrite · 6 months ago
I've worked at a number of companies - the frequency and seriousness of customer issues here were way beyond anything I've experienced anywhere else.
hackable_sand · 6 months ago
Everyone should just write their own software then.
geremiiah · 6 months ago
AI helps you cook code faster, but you still need a good understanding of the code. Just because the writing part is done quicker doesn't mean a developer can now shoulder more responsibility. This will only lead to burnout, because the human mind can only handle so much responsibility.
crystal_revenge · 6 months ago
> but you still need to have a good understanding of the code

I've personally found this is where AI helps the most. I'm often building pretty sophisticated models that also need to scale, and nearly all SO/Google-able resources tend to be stuck at the level of "fit/predict" thinking that so many DS people remain limited to.

Being able to ask questions about non-trivial models as you build them, really diving into the details of exactly how certain performance improvements work and what trade-offs there are, and even just getting feedback on your approach, is a huge improvement in my ability to really land a solid understanding of the problem and my solution before writing a line of code.

Additionally, it's incredibly easy to make a simple mistake when modeling a complex problem, and getting that immediate feedback is a kind of debugging you can otherwise only get on teams with multiple highly skilled people (which at a certain level is a luxury reserved for people working at large companies).

For my kind of work, vibe-coding is laughably awful, primarily because there aren't tons of examples of large ML systems for the relatively unique problem you are often tasked with. But avoiding mistakes in the initial modeling process feels like a super power. On top of that, quickly being able to refactor early prototype code into real pipelines speeds up many of the most tedious parts of the process.

sothatsit · 6 months ago
I agree in a lot of ways, but I also feel nervous that AI could lull me into a false sense of security. I think AI could easily convince you that you understand something when really you don't.

Regardless, I do find that o3 is great at auditing my plans or implementations. I will just ask "please audit this code" and it has like a 50% hit rate on giving valuable feedback to improve my work. This feels like it has a meaningful impact on improving the quality of the software that I write, and my understanding of its edge cases.

hnthrow90348765 · 6 months ago
They often combine front end and back end roles (and sometimes sysadmin/devops/infrastructure) into one developer, so now I imagine they'll use AI to try and get even more. Burnout be damned, just going by their history.
bluefirebrand · 6 months ago
> Just because the writing part is done quicker

The writing part was never the bottleneck to begin with...

Figuring out what to write has always been the bottleneck for code

AI doesn't eliminate that. It just changes it to figuring out if the AI wrote the right thing

Towaway69 · 6 months ago
Humans hate to think and make decisions, they like being told what to do.

So having an AI do the dangerous part, the thinking, leaves humans to do what they do best: follow orders.

Even better, the AI will take on the responsibility when anything fails: just get the AI to fix it; after all, the AI coded the mistake.

satvikpendem · 6 months ago
I read a few books the other day: The Million-Dollar, One-Person Business and Company of One. They both discuss how, with the advances in code (to build a product with), the infrastructure to host it (AWS, so you don't need to build data centers), and the network of people to sell to (the internet in general, and social media specifically, both organic and ads-based), the likelihood of running a large multi-million-dollar company all by yourself greatly increases, in a way it never has before in the history of humanity.

They were written before the advent of ChatGPT and LLMs in general, especially coding related ones, so the ceiling must be even greater now, and this is doubly true for technical founders, for LLMs aren't perfect and if your vibed code eventually breaks, you'll need to know how to fix it. But yes, in the future with agents doing work on your behalf, maybe your own work becomes less and less too.

_1tem · 6 months ago
There are already several million-dollar companies of one. Pieter Levels is one such famous builder on X. CertifyTheWeb.com is another one man millionaire product on HN.
satvikpendem · 6 months ago
Yes, Levels and many others are already covered in those books.