The author could have skipped a step, formatted with HTML tags instead of markdown, and published the HTML directly, with zero need for any generator. Using scp and HTML, like it was 1998.
I'm a Dev Manager in the Shopify Fulfillment Network. If anyone is interested in talking about the culture or what working at Shopify is like, feel free to hit me up on twitter @tmarthal or email tom.marthaler@shopify
Have you considered that the original poster's experience is actually the norm and that your experience is the anomaly? I was 1 for 2 on organizations with shitty leadership, and the organization that was run properly had zero open headcount. Not every place that is hiring is one of the "good ones".
Check out the old-fart tool: 85% of the company has been at Amazon for 3 years or less. Do you think that if the normal/average organization/team were a great place to be, there would be so much attrition?
LEO orbits have speeds around 7.8 km/s (round up to ~8000 m/s for quick calculations). This avoidance detection is saying that the two satellites, both traveling at ~8000 m/s, would be in the same 50 m box at the same second.
A quick calculation shows that the collision avoidance is operating at the millisecond level (at least) to predict this collision: 50 m / (8000 m/s) ≈ 0.006 seconds.
One thing someone once mentioned to me is that space is big and things travel fast. It's hard to believe that the two satellites (most likely each <1m in diameter) came "close" to colliding, when a half second later they would be 8000 meters apart.
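To make the back-of-the-envelope numbers above concrete, here is a throwaway sketch of the arithmetic. Only the 50 m box and the rounded-up ~8000 m/s figure go into it; everything else is just the division and multiplication spelled out.

    // Back-of-the-envelope check on the figures above: how long an object
    // at LEO speed takes to cross a 50 m box, and how far it travels in 0.5 s.
    public class NearMissMath {
        public static void main(String[] args) {
            double boxMeters = 50.0;      // size of the conjunction "box"
            double leoSpeedMps = 8000.0;  // ~7.8 km/s rounded up

            // Time for an object at LEO speed to cross the 50 m box.
            double windowSeconds = boxMeters / leoSpeedMps;
            System.out.printf("Time to cross the 50 m box: %.4f s%n", windowSeconds); // ~0.006 s

            // Distance a single satellite covers in half a second; two satellites
            // on crossing headings separate at up to roughly twice this.
            double halfSecondMeters = 0.5 * leoSpeedMps;
            System.out.printf("Distance covered in 0.5 s: %.0f m%n", halfSecondMeters); // 4000 m
        }
    }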
By that logic, surely every interview practice is justified? And the less related to the job, the better?
I could ask a programmer to build a wooden boat, become a proficient opera singer, or complete four marathons in a year.
In a sense, they are. As long as the interview practice is well understood by applicants, it only filters out the applicants who cannot meet the requirements of that practice. Granted, the FAANG interview process and filtering only work because the SWE jobs are desirable enough.
I understand that the required practices and filtering also select for a certain type of applicant. This is the main fault of the system (it limits the potential for diversity).
However, everything (and I mean everything) at Amazon depends on the team (and organization) that you land in. Some organizations do not have senior technical leadership; service ownership is handed off to teams without long-tenured Amazon engineers, so they never get exposed to the right kinds of tools (nor do these teams get time to discover, learn, and onboard to the tools that do exist). This is how an engineer can have the experience written about in the article.
The article is anecdotal, and definitely not the norm for the "majority of engineers".
You can usually count on Java having a native library for a given task, so you can just use a JVM-based language (does Lambda support that? I bet it does).
The POM packaging and jar-based deployment seemed to make the dependencies work.
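Lambda does have a Java runtime, and the handler side is small. A minimal sketch, assuming the aws-lambda-java-core dependency is declared in the POM and the build produces a fat/shaded jar; the class and package names here are placeholders, not anything from the comment above.

    package example;

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    import java.util.Map;

    // Placeholder handler: Lambda would be pointed at example.HelloHandler::handleRequest.
    public class HelloHandler implements RequestHandler<Map<String, String>, String> {

        @Override
        public String handleRequest(Map<String, String> input, Context context) {
            // Anything declared in the POM rides along inside the deployed jar,
            // which is why the jar-based deployment makes the dependencies "just work".
            String name = input.getOrDefault("name", "world");
            return "Hello, " + name;
        }
    }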
1) It's ridiculous to also "fix" the zestimate history. Put the old one in the history, and explain near the chart what was rejiggered.
2) The market in my immediate area, as evidenced by the comps from zillow and the sales prices in the newspaper, has only increased. The conservative comps are near the old zestimate - this is a suburban area. The wild ones are higher than my private estimate.
What happens is that the model uses trends to extrapolate from each "real" data point (in this case, a house sale in your neighborhood). The problem, and what Kalman Filters help manage, is the uncertainty propagation between house sales. When it has been a long time since a sale occurred, it is unclear what the real/actual price of a home is. This means the error bounds on the estimated price are large, and the zestimate still just reports the mean value of this huge uncertainty.
Then a house is sold in your area, a new data point is recorded, and the filter re-adjusts itself, collapsing its uncertainty/error bounds around that measurement at the time of the measurement. And you get a correction. This is why the "old zestimate" is updated.
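A rough 1-D sketch of that predict/update behaviour, with made-up numbers. This only illustrates the filter idea described above (variance grows between sales, then collapses around a new sale); it is not Zillow's actual model.

    // 1-D Kalman-style sketch: between sales the price estimate is only
    // extrapolated, so its variance grows; when a sale ("measurement")
    // arrives, the estimate and its uncertainty collapse around it.
    // All numbers are illustrative.
    public class PriceFilterSketch {

        static double estimate = 300_000;          // current price estimate (mean)
        static double variance = 10_000 * 10_000;  // uncertainty on that estimate

        // Predict step: no new information, so uncertainty grows with time.
        static void predictOneMonth(double monthlyTrend) {
            estimate += monthlyTrend;   // extrapolate using the trend
            variance += 5_000 * 5_000;  // process noise: add uncertainty
        }

        // Update step: a nearby sale is observed, with its own noise.
        static void updateWithSale(double salePrice, double saleVariance) {
            double gain = variance / (variance + saleVariance);  // Kalman gain
            estimate = estimate + gain * (salePrice - estimate); // pull toward the sale
            variance = (1 - gain) * variance;                    // uncertainty collapses
        }

        public static void main(String[] args) {
            // A year with no nearby sales: the reported mean drifts with the
            // trend while the (unreported) error bounds balloon.
            for (int month = 0; month < 12; month++) {
                predictOneMonth(1_000);
            }
            System.out.printf("Before sale: %.0f +/- %.0f%n", estimate, Math.sqrt(variance));

            // A comp sells next door: the filter re-centres on it and the
            // uncertainty shrinks -- the "correction" seen in the zestimate.
            updateWithSale(380_000, 20_000 * 20_000);
            System.out.printf("After sale:  %.0f +/- %.0f%n", estimate, Math.sqrt(variance));
        }
    }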