Since Claude Code is CLI-based, I reviewed my CLI toolset: I migrated from iTerm2 to Ghostty and tmux, and from Cursor to Neovim (my God is it good!).
I just had a 14-hour workday with this tooling. It's so good that I complete weeks or months of work within days. An absolute beast.
At this point I'm thinking IDEs no longer reflect the changing reality of software development. They are designed for navigating project folders and writing or changing files. But I don't review files that much anymore. I'd rather write prompts and watch Claude Code create a plan, implement it, and even write meaningful commit messages.
Yes, I can navigate the project with Neovim, and yes, I can make commits with git and lazygit, but my time is best spent designing, planning, prompting, reviewing, and testing.
Say no more.
Remote: Yes, open to hybrid and in-person too
Willing to relocate: Yes, depends on the offer
Technologies: TypeScript, Angular, React, PHP, Laravel, JavaScript, CSS, Flutter
Résumé/CV: [https://drive.google.com/file/d/1v1aK0uTlzlPCOFOvJ9SDhtA8pox...
Email: ms.vignesh31 [at] gmail [dot]com
I'm flexible with the tech stack, open to learning anything I don't know, and would like to work on end-to-end solutions and solve technical problems.
Thoughtful AI is a health-tech company building solutions for revenue cycle management (RCM). We're solving immediate problems in healthcare administration by speeding up payment cycles for providers and increasing accuracy in healthcare claim approvals. Something we can brag about a little is that there's so much demand for our solutions that we're throttling our sales team so our engineering can keep up. Come help us solve problems in healthcare!
Forward Deployed Engineer | $140k - $190k | https://www.thoughtful.ai/job?gh_jid=4488592005
Staff Software Engineer, Platform | $190k - $250k | https://www.thoughtful.ai/job?gh_jid=4467861005
Staff Software Engineer, Applied AI | $190k - $250k | https://www.thoughtful.ai/job?gh_jid=4470319005
We review every application.
There's a lot of false dichotomy in the ongoing discussions here, which assume a binary choice between "control for the government" and "freedom for all." It's a spectrum, and at the most basic level, a robust process for handling reports of illegal activity should be accepted.
Only a few kinds of illegal activity can happen within the Telegram app itself, like fraud or minor abuse. Those must be reported by end users, and individual action can be taken against the offenders.
What the government is asking for is a massive surveillance backdoor in the name of preventing crime, but they decide what they can monitor. It is a Pandora's box, and once you open it there is no going back. Even if the current government asks with the purest intentions and manages it well, the same cannot be said for any future elected government.
People who use third-party apps are outliers. They don't make Reddit any money, so Reddit would be glad to be rid of them while also saving on API costs. No other online service allows freeloading millions upon millions of API requests, let alone supports third-party apps to this extent. Facebook, WhatsApp, Discord, Instagram, Slack, and Twitter all don't, so it's a wonder Reddit did all this time, for free.
If subreddits go dark for 48 hours, then from Reddit's perspective they'll think: great, it will return to normal afterward. If they go dark indefinitely, Reddit admins will wait a few days, then force the subs open and demod everyone involved. There is a long list of mods willing to step in.
In the end, consumer boycotts like this simply don't work.
I do agree with you. But the outliers usually feed the site with good content. There's a quote by Paul Graham (or someone):
> whatever the hackers do on the weekends, normal people will be doing regularly in 10 years.
I'm reciting this from memory, so it's probably watered down. But the idea is that the outliers are the ones who make it worthwhile to be in the community. If they leave for any reason and land somewhere else, the normal users will eventually follow. It won't be visible immediately, but it will have negative effects in the long run.
Came across this book randomly on Twitter and picked it up. The book is broken into 26 essays about significant pieces of code (defined loosely), ranging from the Morris worm to PageRank to the popup window and the 1x1 invisible GIF, and how these shaped the modern tech landscape. A lovely read overall that really shows how pieces of code you work on today can end up having a long-lasting impact on how society perceives technology as a whole. Best of all, it's not a heavy read, but it offers a lot of concise info that can send you down Wikipedia rabbit holes.
Paper: Amazon DynamoDB: A Scalable, Predictably Performant, and Fully Managed NoSQL Database Service [2]
Database systems have always been a passion of mine, and the paper from AWS about how DynamoDB works internally is an incredible look into what makes a NoSQL DB platform capable of serving 89 million requests per second _(this figure is in the intro)_, which is incredible scale. It's always good to see how engineering decisions shape products, and it's been interesting to watch Dynamo take shape over the last decade _(though I recommend most folks stay away from it because of its mad pricing)_.
[1]: https://www.goodreads.com/book/show/60254955-you-are-not-exp...
> LLMs get endlessly confused: they assume the code they wrote actually works; when tests fail, they are left guessing as to whether to fix the code or the tests; and when it gets frustrating, they just delete the whole lot and start over. This is exactly the opposite of what I am looking for. Software engineers test their work as they go. When tests fail, they can check in with their mental model to decide whether to fix the code or the tests, or just to gather more data before making a decision. When they get frustrated, they can reach for help by talking things through. And although sometimes they do delete it all and start over, they do so with a clearer understanding of the problem.
My experience is based on using Cline with Anthropic's Sonnet 3.7 to do TDD on Rails, and it has been very different. I instruct the model to write tests before any code, and it does. It works in chunks small enough that I can review each one. When tests fail, it tends to reason very well about why and fixes the appropriate place. It's very common for the LLM to consult more code as it goes to learn more.
It's certainly not perfect, but it works about as well as, if not better than, a human junior engineer. Sometimes it can't solve a bug, but human junior engineers get stuck in the same way too.
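For readers unfamiliar with the workflow being described: the test-first loop (write a failing test, then implement just enough code to pass) can be sketched as below. This is a hypothetical minimal example in plain Ruby with minitest, not the commenter's actual Rails code; `slugify` is an invented function for illustration.

```ruby
require "minitest/autorun"

# Step 1 (red): write the test first, before any implementation exists.
class SlugTest < Minitest::Test
  def test_titles_become_url_slugs
    assert_equal "hello-world", slugify("Hello, World!")
  end
end

# Step 2 (green): implement just enough to make the test pass.
def slugify(title)
  title.downcase                 # lowercase everything
       .gsub(/[^a-z0-9]+/, "-")  # collapse non-alphanumeric runs to "-"
       .gsub(/\A-|-\z/, "")      # trim leading/trailing dashes
end
```

When the model is instructed to follow this loop, each red-green step is a small, reviewable chunk, which is what makes the per-chunk review described above practical.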
Not really. I would say they used it well and understood the limitations of LLMs exactly. No matter how polished or good the output is, LLMs can't build a mental model of a codebase the way a human does, because they are just statistical machines.