Using this beast as IntelliSense is just one application (called "Copilot"), and sure, that application has its annoyances at times. But I am not talking about that.
To me, this is like we've found a way to transmute iron into gold with low energy usage, and people are complaining that gold is not that useful, while most chemists haven't even heard the news yet. I'm constantly amazed by this, every single day, as I read threads like this one.
I'll admit I haven't played with Copilot yet (I don't think my employer would be happy about me sending proprietary code off to third-party servers, so I've effectively banned myself from using it at work*), but for anything non-trivial, like your example of complex SQL queries, I'd be reluctant to use the generated output without extra scrutiny (essentially a very fine-toothed code review, which is exhausting).
My opinion will probably change as the tools mature, but for now I'm treating them primarily as toys, which limits the excitement.
Something like TLDR is less risky since it's not producing code, just summarising it, but I'd still be wary of trusting it while the field is so new. Maybe this speaks more to my own paranoia than anything else!
EDIT: *and on this topic while I'm here: I'm actually a bit confused (and honestly... jealous?) on the topic of privacy for these kinds of external models. Is everyone who's using Copilot and tools like this working at non-Bigcos? Or just ignoring that it's sending off your source code to a third party server? Or am I missing something here?
It'd be against the rules to use external pastebins or other online tools that send off private source code to a server, so I'm kind of shocked how many devs are talking about how they use AI tools like this at work... is this just a case of "ask for forgiveness, not permission"?
You can define a base config for your repo and then derive two sub-configs from it, each with a different runner and mutually exclusive test selection (by suffix, folder, or any convention that works for you).
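Something like this, for example (a rough sketch; the ".light.test.js" suffix and the jest-light-runner choice are just placeholders for whatever convention and runner you pick):

    // jest.config.js: one base config, two projects with different runners.
    const base = {
      testEnvironment: 'node',
    };

    module.exports = {
      projects: [
        {
          ...base,
          displayName: 'isolated',
          // Default jest-runner: full sandboxing, fresh module registry per file.
          testMatch: ['<rootDir>/**/*.test.js'],
          testPathIgnorePatterns: ['\\.light\\.test\\.js$'],
        },
        {
          ...base,
          displayName: 'light',
          // Faster, ESM-friendly runner that trades away isolation.
          runner: 'jest-light-runner',
          testMatch: ['<rootDir>/**/*.light.test.js'],
        },
      ],
    };

Then you can migrate file by file just by renaming, and "jest --selectProjects light" runs one side on its own.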
Glad that this one has the ability to be configured more granularly - that'll come in handy for migrating gradually.
The good news is that we have never been shy about making breaking changes, and we are working on cleaning house and making many legacy components optional, all while bringing the existing community along with us.
As for mocking, you don’t have to rely on Jest’s built-in mocking utilities; you can use the ones you like better.
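For example, here's a quick sketch with Sinon standing in as a third-party mocking library (the "rates" service and "convert" function are made up for illustration):

    // Jest as the runner, Sinon for the stubbing.
    const sinon = require('sinon');

    // Hypothetical service with a network call we don't want in tests.
    const rates = {
      fetch: () => { throw new Error('network disabled in tests'); },
    };

    function convert(amount) {
      return amount * rates.fetch();
    }

    afterEach(() => sinon.restore());

    test('converts using a stubbed exchange rate', () => {
      // sinon.stub instead of jest.mock/jest.spyOn.
      sinon.stub(rates, 'fetch').returns(2);
      expect(convert(10)).toBe(20);
    });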
If you care more about raw performance and ES module support and less about isolation, check out the jest-light-runner: https://github.com/nicolo-ribaudo/jest-light-runner
We also mentioned it in our Jest 28 blog post: https://jestjs.io/blog/2022/04/25/jest-28
I’m wondering if it’s time to consider taking a big step: making this runner the default and turning isolation into something people opt into. However, in my past experience at large companies (both first-hand and second-hand), the lack of isolation in tests led to major reliability problems with testing infrastructure. I still feel isolation is the better default today, but maybe we should have a serious discussion about Jest’s next set of defaults.
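For anyone unfamiliar with what that isolation buys you, here's a contrived sketch (the counter module is made up): under the default runner every test file gets a fresh module registry, so module-level state can't leak between files, while a shared-process runner can let one file's mutation show up in another.

    // counter.js: hypothetical module with module-level state.
    let count = 0;
    module.exports = {
      increment: () => ++count,
      get: () => count,
    };

    // a.test.js
    const counter = require('./counter');
    test('a increments the counter', () => {
      expect(counter.increment()).toBe(1);
    });

    // b.test.js
    const counter = require('./counter');
    test('b starts from zero', () => {
      // Passes under the default isolated runner (fresh registry per file);
      // can fail if both test files share one module instance.
      expect(counter.get()).toBe(0);
    });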
Do you know if there's a way to use that runner per-file or per-project within jest? Seems like that's the only way it'd be possible to smoothly migrate in a large codebase without rewriting all tests at once.
It's apparently the "Dell PowerEdge R740xd2 rack server".
https://docs.microsoft.com/en-us/microsoft-edge/web-platform...
Do you know if there's a way to see that XML list they mention anywhere publicly? I can't find a link to it on that page.
I guess it should be possible to spin up IE11 in a VM on macOS and inspect the network traffic, but it would be nice to take a look and see which sites are on there.