It’s almost like reliving the late 1990s with far more ads, more vanilla websites, and worse search engine quality.
I do have hope that this delay goes down as I keep implementing things.
• Position is where you are at any moment. If you're not moving, your position doesn't change.
• Velocity is how quickly your position changes. If you are doing 30 MPH on a perfectly straight road with no stops and starts, you may not even notice you're moving until you look out the window.
• Acceleration is how quickly your velocity changes. It's what makes you feel like you're being pushed back into your seat, for example when your velocity increases from 30 MPH to 60 MPH.
• Jerk is how quickly the acceleration changes. It's what makes your head snap back against the headrest. A good driver will change acceleration slowly to reduce this effect. If there is too much jerk, it may mean that your driver is being a jerk.
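To make the chain concrete, here's a rough TypeScript sketch with made-up numbers: each quantity is just the rate of change of the one before it, approximated with finite differences.

    // Each quantity is the rate of change of the previous one,
    // approximated with finite differences over timestamped samples.
    interface Sample { t: number; value: number } // seconds, meters

    function rateOfChange(samples: Sample[]): Sample[] {
      const out: Sample[] = [];
      for (let i = 1; i < samples.length; i++) {
        const dt = samples[i].t - samples[i - 1].t;
        out.push({ t: samples[i].t, value: (samples[i].value - samples[i - 1].value) / dt });
      }
      return out;
    }

    // Position of a car pulling away from a stop (illustrative numbers only).
    const position: Sample[] = [
      { t: 0, value: 0 }, { t: 1, value: 1 }, { t: 2, value: 4 }, { t: 3, value: 9 },
    ];
    const velocity = rateOfChange(position);     // how fast position changes
    const acceleration = rateOfChange(velocity); // how fast velocity changes
    const jerk = rateOfChange(acceleration);     // how fast acceleration changes
    console.log({ velocity, acceleration, jerk });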
If you block access to the internet or to their AI API servers [1], it refuses to start a new chat invocation. If you block access halfway through a conversation, the conversation continues just fine, so there's no technical barrier to it actually running offline; they just don't allow it.
Their settings page also says that they can't even guarantee that they implemented the offline toggle properly, a flag that should be the easiest thing in the world to enforce:
>Prevents most remote calls, prioritizing local models. Despite these safeguards, rare instances of cloud usage may still occur.
So you can't even block access to the very servers that they say their faulty offline toggle would leak data to.
[1] https://www.jetbrains.com/help/ai-assistant/disable-ai-assis...
This puts me off a bit from finally trying local models. Anyone know what kind of data is collected in those rare instances of cloud usage?
There’s nothing else I really ever end up going back to look at again.
Instead, the way to get past this foolishness is to ask open-ended questions that expect precise answers even though the questions themselves are not precise. That presents too much variance for a canned AI answer to cover. For example: talk to me about the methods of your favorite Node core library. In that case the candidate has to pick, on the fly, from any of the libraries that ship with Node and immediately start talking about what they like about certain methods and how they would use them.
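For what it's worth, the kind of on-the-spot answer that question invites looks roughly like this sketch; the choice of node:path is arbitrary, not what any particular candidate would say.

    // Picking one core module and a couple of methods I actually reach for.
    import { join, extname } from "node:path";

    // join() handles separators across platforms instead of string concatenation.
    const logFile = join("/var/log", "app", "today.log");

    // extname() is a cheap way to branch on file type without regexes.
    console.log(extname(logFile)); // ".log"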
Another example: Tell me about full duplex communication on the web. AI will immediately point you to WebSockets, but it won't explain what full duplex means in 3 words or less or why WebSockets are full duplex and other solutions aren't.
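A rough client-side sketch of the point (browser WebSocket API, placeholder URL): full duplex means both directions can send simultaneously over one connection, rather than the request-then-response dance of plain HTTP.

    // Full duplex in roughly three words: "both directions, simultaneously".
    // One connection; either side can send at any time.
    const ws = new WebSocket("wss://example.com/feed"); // placeholder URL

    ws.onopen = () => {
      // The client pushes whenever it likes...
      setInterval(() => ws.send(JSON.stringify({ ping: Date.now() })), 5000);
    };

    // ...while the server pushes whenever it likes, on the same connection.
    ws.onmessage = (event) => {
      console.log("server said:", event.data);
    };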
Another example: Given a single page application, what would you recommend to get full state restoration within half a second of page request? AI barfs on that one. It starts explaining what state restoration is, which doesn't answer the question.
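One of several reasonable directions that question is fishing for, sketched here with hypothetical names: keep a small state snapshot client-side and rehydrate it synchronously before first render, so restoration never waits on the network.

    // Hypothetical shape of the state we want to survive a page request.
    interface AppState {
      route: string;
      scrollY: number;
      formDraft: Record<string, string>;
    }

    const KEY = "app-state-v1";

    // Save a compact snapshot whenever state changes (debounce in practice).
    function persist(state: AppState): void {
      sessionStorage.setItem(KEY, JSON.stringify(state));
    }

    // Restore synchronously at boot, before kicking off any network fetches,
    // so the UI can paint restored state well inside the half-second budget.
    function restore(): AppState | null {
      const raw = sessionStorage.getItem(KEY);
      return raw ? (JSON.parse(raw) as AppState) : null;
    }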
In other words AI is not a problem so long as you don't suck as an interviewer.
I used to ask a simpler (for AI) question. The candidate reads out the first sentence of the AI's answer. By this time I'd have already established that the candidate is not genuine. Our interview process lets us ride out the interview as a courtesy and also to try to extract something out of the candidate that the company could use.
Anyway, once they read out the first sentence from the AI with utmost sincerity, I'd follow up with deeper questions on the topic at hand. 99% failed to answer the second question well. The ones who let me get to the third and fourth questions are devs who still have their original thinking hats on but just use AI out of nervousness, or who generally don't interview well. Those we'd explore further and suggest for lesser roles, alternate streams, etc. This is all my experience; others' mileage may vary.
2. Giving some structure to my open-source project ideas. I had a good time getting over my analysis-paralysis while writing them down.