There was one company whose process I really appreciated:
- A take-home test, more of a puzzle-style challenge.
- The interviewer does a quick review of your submission.
- If they like what they see, you're invited to a follow-up session where you explain your code and make a few simple changes to it.
This approach proves that the work is genuinely yours, shows your thought process, and still gives you the opportunity to demonstrate live coding on code you're already familiar with.
For me, this is too important for good communication.
That said, it's not at the higher end for sound and image quality. It's not the worst either, just meh.
These devices should be apps on your phone. Google and Apple won't open their devices enough for these assistants to be useful, because they want to roll their own. So their only hope is to create a standalone device. But yeah, they're dead in the water.
Quite simple to start with, and a nice system for adding some scripting and styles without needing to bring in a framework.
IMO, I found those specific example tasks to be better handled by my IDE's refactoring features, though support for that is going to vary by project/language/IDE. I'm still more of a Luddite when it comes to LLM-based development tools, but the best case I've seen thus far is small first bites out of a big task. Working on an older, untested codebase recently, it's been things like setting up 4-5 tests that I'll then expand into a full test suite (something like the sketch below). You can't take more than a few "big" bites out of a task before you have zero context as to what direction the vector soup sloshed in.
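To give a concrete sense of the size of those first bites, here's a minimal sketch of the kind of seed tests I mean, in pytest. `legacy.text.slugify` is a hypothetical helper standing in for whatever untested function you'd start with; the names are made up for illustration.

```python
# Seed tests of the size an LLM can draft in one pass, to be
# expanded into a full suite by hand afterwards.
# `legacy.text.slugify` is a hypothetical legacy helper.
import pytest

from legacy.text import slugify


def test_basic_lowercasing():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("C'est la vie!") == "cest-la-vie"


def test_collapses_whitespace():
    assert slugify("too   many   spaces") == "too-many-spaces"


def test_empty_input():
    assert slugify("") == ""


# Degenerate inputs: nothing slug-worthy should yield an empty slug.
@pytest.mark.parametrize("raw", ["---", "   ", "!!!"])
def test_degenerate_inputs_yield_empty_slug(raw):
    assert slugify(raw) == ""
```

A handful of cases like these pins down the existing behavior cheaply; the human work is deciding which behaviors are worth pinning.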
So, in terms of carpentry, I don't want an LLM framer whose work I need to build off of, but an LLM millworker handing me the lumber is pretty useful.
In terms of AI-assisted programming, I micromanage my AI. I give it specific instructions with single steps. I don't really let it build whole files by itself, as it usually makes a mess of things, but it's useful for predictable changes and marginally faster than doing them manually.