Absolutely zero regrets, the cumulative savings across everything that is faster and the massive step up in DX is worthy of the hype.
Apparently there are already 5k TX electric taxis out there, which is a good start:
https://levc.com/levc-celebrates-sale-of-5000th-tx-electric-...
In theory, once they sort that out, Rust should be able to turn on `restrict` wherever it applies, globally, "for free".
[0]: https://github.com/rust-lang/rust/pull/82834
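A minimal sketch of why this could be "for free" (my own illustration, not from the linked PR): safe Rust's `&mut` references already carry the no-overlap guarantee that C's `restrict` qualifier merely asserts, so the compiler can in principle emit LLVM `noalias` on them without any source changes.

```rust
// In C you would promise non-aliasing by hand:
//   void add(int *restrict dst, const int *restrict src);
// In Rust, the borrow checker enforces it: you cannot hold a
// live `&mut` and another reference to the same data in safe code,
// so `dst` below can be marked `noalias` by the compiler.
fn add_assign(dst: &mut i32, src: &i32) -> i32 {
    *dst += *src;
    *dst
}

fn main() {
    let mut a = 1;
    let b = 2;
    assert_eq!(add_assign(&mut a, &b), 3);
    // add_assign(&mut a, &a); // would not compile: aliasing mutable borrow
    println!("{}", a);
}
```

The commented-out line is the point: the aliasing pattern that makes `restrict` dangerous in C is a compile error in safe Rust, which is why the optimization can be applied globally once the LLVM miscompilation bugs are resolved.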
The JS runner in Milkman felt a little limited; I couldn't, for example, figure out how to fetch a document.cookie from a response. It would be awesome if the JS runner could match the usefulness of Postman's, in addition to being extendable like the rest of Milkman seems to be.
One thing I wish existed, and that this project might be extensible towards, is a better JMS / ActiveMQ client.
It had so many compatibility issues with real JS that it was kind of untenable to use it with scripts designed for other JS environments: https://jaxenter.com/nashorn-javascript-engine-deprecated-14...
The only real way to deal with JS now is Rhino (which still exists!), or some custom binding to V8 or something via JNI (yikes).
Our work uses Nashorn a lot for scripting and this is a big blocker for us to move to newer Java versions.
This tool doesn't help you with any of that. It seems to be a glorified awk script. My concern is that helping the user with the easiest part of anonymizing data stands to encourage the user to go full steam ahead without slowing down to stop and think very carefully about what they're doing.
We absolutely agree this tool only solves the easiest part of anonymising data; internally we rely on our team of data scientists to do the difficult parts. This tool is absolutely not up to the task of anonymising a dataset in such a way that it could be made public. For us, it's about risk management vs effort: from a security perspective there are scenarios where we can use samples of data that have gone through this process and substantially decrease the risk of holding data internally in multiple places, without significant effort. If we were to go on to make any of these datasets public, we'd be looking for a better suited tool (e.g. ARX [2]).
Regarding one part of your comment:
> My concern is that helping the user with the easiest part of anonymizing data stands to encourage the user to go full steam ahead without slowing down to stop and think very carefully about what they're doing.
We're going to try to add something to the README addressing this exact question from both of you, as it's one I anticipate we're going to get asked a lot, and one that carries risk if it's not made obvious from the outset. So thanks for the constructive line of questioning: it really will help us, and the people who choose to use this tool, make a decision that's right for them and their use-cases.
Original SGML was actually closer to markdown. It had various options to shorten and simplify the syntax, making it easy to write and edit by hand, while still having an unambiguous structure.
The verbose and explicit structure of xhtml makes it easier to process by tools, but more tedious for humans.
It’s kind of a huge deal that I can give a Markdown file of plain text content to somebody non-technical and they aren’t overwhelmed by it in raw form.
HTML fails that same test.