If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
…What am I even reading? Am I crazy for thinking this is a crazy thing to say, or is it actually crazy?
The more nuanced "outrage" here is that taking humans out of the agent loop is, as I have commented elsewhere, quite flawed TBH and very bold to say the least. And while every VC is salivating, more attention should instead be given to asking all the AI Agent PMs, Tech Leads of AI, or whatever the title is, some of the following:
- What _workflow_ are you building?
- What success have you had with your team/new hires in getting them to use it?
- What's your RoC on investment in the workflow?
- How varied is this workflow? Is every company just building its own workflows, or are there useful patterns emerging in agent orchestration?
The only github I could find is: https://github.com/strongdm/attractor
Building Attractor
Supply the following prompt to a modern coding agent (Claude Code, Codex, OpenCode, Amp, Cursor, etc.):

    codeagent> Implement Attractor as described by https://factory.strongdm.ai/
Canadian girlfriend coding is now a business model.

Edit:
I did find some code. Commit history has been squashed unfortunately: https://github.com/strongdm/cxdb
There's a bunch more under the same org but it's years old.
Let's start with the `/research -> /plan -> /implement` (RPI) loop. When you are building a complex system for teams you _need_ humans in the loop, and you want them focused on design decisions. Having structured workflows around agents gives those humans a better UX for making those design decisions. This is necessary for controlling drift, context pollution, and general mayhem in the code base. _This_ is the starting thesis of spec-driven development.
How many times have you, as a newbie, copied a slash command, pressed /research then /plan then /implement, only to find after several iterations that the result is inconsistent, and gone back to fix it? Many people still go back and forth with ChatGPT, copying their Jira docs in and out and answering people's questions on PRD documents. This is _not_ a defence; it is the user experience of working with AI for many.
One very understandable path to solving this is to _surface_ to humans structured information extracted from your plan docs, for example:
https://gist.github.com/itissid/cb0a68b3df72f2d46746f3ba2ee7...
In this very toy spec-driven development setup, the idea is that each step in the RPI loop is broken down and made very deterministic, with humans in the loop. This is a system designed by humans (Chief AI Officer, no kidding) for teams that follow fairly _customized_ processes for working fast with AI, without it turning into a giant pile of slop. And the whole point of reading code or doing QA is this: you stop the clock on development and take a beat to look at the high-signal information. Testers want to read tests and QAers want to test behavior, because, well written, they can tell you a lot about whether the software works. If you have ever written an integration test on brownfield code with poor test coverage, and made it dependable after several days in the dark, you know what that feels like... Taking that step out is what all the VCs say is the last game in town.
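The gated RPI idea above can be sketched as a toy loop. Everything here is invented for illustration (stage names, the `approve` callback); real setups wire these steps to slash commands, plan docs, and actual review UIs:

```python
# Toy sketch of an RPI (research -> plan -> implement) loop with a human
# approval gate between stages. The human sees each artifact before the
# next stage runs, which is where drift gets caught.

def run_rpi(task, stages, approve):
    """Run each stage in order; stop early if the human rejects an artifact."""
    artifact = task
    for name, stage in stages:
        artifact = stage(artifact)        # agent does the work for this stage
        if not approve(name, artifact):   # human reviews before moving on
            return name, artifact         # hand control back to the human
    return "done", artifact

# Placeholder stages standing in for real agent calls.
stages = [
    ("research", lambda t: f"notes on {t}"),
    ("plan", lambda notes: f"plan from {notes}"),
    ("implement", lambda plan: f"code per {plan}"),
]

status, out = run_rpi("add rate limiting", stages, lambda name, a: True)
```

The point of the sketch is only that the approval callback sits *between* stages, not after the whole run.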
This StrongDM stuff is a step beyond what I can understand: "no humans should write code", "no humans should read code", really..? But here is the thing that puzzles me even more: spec-driven development, as I understand it, is, to use borrowed words, like raising a kid. Once you are a parent you want to raise your own kid, not someone else's, because it's just such a human-in-the-loop process. Every company, tech or not, wants to make its own process that its engineers like working with. So I am not sure they even have a product here...
> Can you find an academic article that _looks_ legitimate -- looks like a real journal, by researchers with what look like real academic affiliations, has been cited hundreds or thousands of times -- but is obviously nonsense, e.g. has glaring typos in the abstract, is clearly garbled or nonsensical?
It pointed me to a bunch of hoaxes. I clarified:
> no, I'm not looking for a hoax, or a deliberate comment on the situation. I'm looking for something that drives home the point that a lot of academic papers that look legit are actually meaningless but, as far as we can tell, are sincere
It provided https://www.sciencedirect.com/science/article/pii/S246802302....
Close, but that's been retracted. So I asked for "something that looks like it's been translated from another language to English very badly and has no actual content? And don't forget the cited-many-times criteria." And finally it told me that the thing I'm looking for probably doesn't exist.
For my taste, telling me "no" instead of hallucinating an answer is a real breakthrough.
The location might still be on your disk: pull up the original Claude JSON, run it through some `jq`, and see what pages it went through and what it did to give you that answer.
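A small sketch of that digging-through-the-transcript idea, in Python rather than `jq`. The record schema here is an assumption purely for illustration (Claude Code keeps per-session JSONL transcripts, commonly under `~/.claude/projects/`, but the real field names may differ; inspect your own files first):

```python
import json

# Fake three-line transcript standing in for a real session JSONL file.
sample = """\
{"type": "user", "text": "where is the config?"}
{"type": "tool_use", "tool": "Read", "input": {"file_path": "/etc/app/config.yml"}}
{"type": "assistant", "text": "It's at /etc/app/config.yml"}
"""

def tool_calls(lines):
    """Yield only the tool-use records, i.e. which files/pages it touched."""
    for line in lines.splitlines():
        rec = json.loads(line)
        if rec.get("type") == "tool_use":
            yield rec

for rec in tool_calls(sample):
    print(rec["tool"], rec["input"])
```

The equivalent `jq` filter would be a one-line `select(.type == "tool_use")` over the same file.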
There is Microsoft Copilot, which replaced Bing Chat and Cortana, and uses OpenAI’s GPT-4 and GPT-5 models.
There is Github Copilot, the coding autocomplete tool.
There is Microsoft 365 Copilot, what they now call Office with built in GenAI stuff.
There is also a Copilot CLI that lets you use whatever agent/model backend you want, too?
Everything is Copilot. Laptops sell with Copilot buttons now.
It is not immediately clear which version of Copilot someone is talking about. 99% of my experience is with the Office one, and it 100% fails to do the thing it was advertised to do two years ago when work first got the subscription: point it at a SharePoint/OneDrive location with a handful of Excel spreadsheets and PDFs/Word docs, and tell it to make a PowerPoint presentation based on that information.
It cannot do this. It will spit out nonsense. You have to hold its hand and tell it everything to do, step by step, to the point that making the PowerPoint presentation yourself is significantly faster, because you don’t have to type out a bunch of prompts and edit its garbage output.
And now it’s clear they aren’t even dogfooding their own LLM products so why should anyone pay for Copilot?
One thing I don't know is whether they have an AI product that can combine unstructured data and databases to give better insights in any new conversation, e.g. an LLM that knows how to convert user queries to the domain model of the tables and extract information. What companies are doing such things?
This would need to be something that can be deployed on-prem or in a private cloud controlled by the company, because the data is quite sensitive.
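The "user query -> domain tables" idea can be sketched as a schema-aware prompt plus a model call. Everything here is hypothetical (the schema, the `call_llm` stub, the canned SQL string); on-prem you would point the stub at a locally hosted model endpoint:

```python
# Toy text-to-SQL sketch: give the model the table schema and ask for SQL.

SCHEMA = {
    "orders": ["id", "customer_id", "total", "created_at"],
    "customers": ["id", "name", "region"],
}

def build_prompt(question):
    """Embed the domain model (tables + columns) in the prompt."""
    tables = "\n".join(f"{t}({', '.join(cols)})" for t, cols in SCHEMA.items())
    return (
        "Given these tables:\n"
        f"{tables}\n"
        f"Write one SQL query answering: {question}"
    )

def call_llm(prompt):
    # Stub returning a canned answer; replace with your private-cloud model.
    return ("SELECT region, SUM(total) FROM orders "
            "JOIN customers ON customers.id = orders.customer_id "
            "GROUP BY region")

sql = call_llm(build_prompt("total sales by region"))
```

Real products layer validation on top (run the SQL read-only, check it parses, check table names against the schema) before showing anything to the user.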
We live in a reasonably dense suburb. Police showed up at our front door and asked to speak with him. They just wanted to make sure he was doing OK. He asked them "how did you find me?" and their response was just "we pinged your phone".
Watching my security camera, they did not stop at any of my neighbors' houses first. It was a beeline to my front door. This leads me to believe whatever coordinates they had were pretty spot on. His car was parked well down the block, not in front of our house, so that was no giveaway.
This was five years ago and has always struck me as a "Huh" moment.
Clearly they don't need that now, because 5G cell towers have gotten precise enough? Even if that's true, 5G being that precise might still not apply to dense urban areas, where more post-processing is required to get good location accuracy...
Of course, yes.
On Mars, would any other mobility system today achieve better performance for its purportedly stated goals (nigh the most ridiculous ever stated, but, to be fair, difficult engineering), i.e. colonization? Also no.
I am surprised, after watching this, that there is so much of the Boston Dynamics man/dog-walking stuff out there, given that mobility is already so well accomplished. Do you really need to invest in an anthropomorphized man that can scale walls and stay stable after getting kicked around?? I know one thing: here on Earth, all the large-scale semi-autonomous (think agro machines) and almost fully autonomous (delivery bots) machines look nothing like humans or canines.
Maybe I have the Dunning-Kruger effect, because I am not a robotics engineer, but why is building an anthropomorphic _mobility_ platform so important for solving _pragmatic_ problems?
I'm expecting my first child soon, so I am building it for me and my family first; if it solves a problem for me, maybe others will like it too!
The basic idea is that you are uploading a curated set of photos you want to share, not your whole camera roll.
You can create one or several family groups that you can share individual photos or albums to. Members of those family groups can view, comment on, like, etc those photos.
You can also generate sharable links for people who don't have an account with a configurable expiry time.
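A minimal sketch of how expiring share links like that could work, assuming a simple token -> (album, expiry) map; all names and the domain are invented, and a real app would persist this server-side:

```python
import secrets
import time

# In-memory store standing in for a database table of share links.
LINKS = {}

def create_link(album_id, ttl_seconds):
    """Mint an unguessable token that expires after ttl_seconds."""
    token = secrets.token_urlsafe(16)
    LINKS[token] = (album_id, time.time() + ttl_seconds)
    return f"https://example.invalid/share/{token}"

def resolve(token):
    """Return the album for a valid token, or None if unknown/expired."""
    entry = LINKS.get(token)
    if entry is None or time.time() > entry[1]:
        return None
    return entry[0]
```

Using `secrets` (not `random`) matters here: the token is the only thing gating access for people without accounts.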
It currently more or less works on the web but I am also working on iOS and Android apps since that is how my extended family would want to interact with it.
I'm not quite ready to launch it to the public but if anybody is interested in trying it out or offering feedback I can privately share it :)
I've been noodling with running a preprocessing step to "tag" pictures and videos with a set of richer spatial and temporal tags using off-the-shelf models, and then just letting a local AI model pick one based on what might match today's theme.
Are you using any models to make your curation step easier or better UX-wise? E.g. "Compose all Christmas pictures with Grandma and the kids on vacation", and it would give you a collection to curate from my library.
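A toy version of that tag-then-filter idea: pretend an offline model has already tagged each photo, then select candidates for a theme query. The tag names and photos are invented for illustration; a real pipeline would get the tags from a vision model and hand the shortlist to a human to curate:

```python
# Photo filename -> tags, as if produced by an offline tagging model.
PHOTOS = {
    "img_001.jpg": {"christmas", "grandma", "kids", "indoor"},
    "img_002.jpg": {"beach", "kids", "summer"},
    "img_003.jpg": {"christmas", "tree", "indoor"},
}

def curate(required_tags):
    """Return photos whose tag set includes everything the theme asks for."""
    return sorted(p for p, tags in PHOTOS.items() if required_tags <= tags)

shortlist = curate({"christmas", "kids"})
```

A local LLM's job would then just be translating "Christmas with Grandma and the kids" into `{"christmas", "grandma", "kids"}` before this filter runs.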