Enable IPv6 on a TP-Link Omada router (ER7212PC) and all internal services are exposed to the outside world: there is no default IPv6 deny-all rule and no IPv6 firewall at all. I get why some people are nervous.
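For context, the kind of default-deny inbound IPv6 policy the router lacks is only a few lines of nftables. This is a generic sketch for a Linux box, not Omada configuration; the interface name `wan0` and the exact ICMPv6 allowances are assumptions:

```
# /etc/nftables.conf — minimal stateful IPv6 inbound filter (sketch; "wan0" is assumed)
table ip6 filter {
  chain input {
    type filter hook input priority 0; policy drop;   # default deny all inbound
    iifname "lo" accept                               # loopback
    ct state established,related accept               # allow return traffic only
    # ICMPv6 needed for neighbor discovery / SLAAC / ping
    icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert, echo-request } accept
  }
  chain forward {
    type filter hook forward priority 0; policy drop; # don't forward unsolicited traffic to the LAN
    ct state established,related accept
  }
}
```

With a policy like this, internal hosts keep globally routable IPv6 addresses but unsolicited inbound connections are dropped, which is roughly the posture NAT gave IPv4 networks by accident.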
- GitHub Copilot PR reviews are subpar compared to what I've seen from other services: at least for our PRs they tend to be mostly an (expensive) grammar/spell check
- given that it's GitHub-native you'd expect good integration with the platform, but when your org is behind a (GitHub) IP whitelist things seem to break often
- the network firewall for the agent doesn't seem to work properly
I've raised tickets for all of these, but given how well it works even when it does, I might as well just migrate to another service.
RELIABILITY AND PERFORMANCE ENGINEER: https://jobs.ashbyhq.com/feldera/709c14e4-1fa9-46b4-9ff8-078...
- Strong background in systems engineering, performance testing, or site reliability engineering.
- Fluency in Python and Linux fundamentals. Rust experience is a plus.
- Experience with distributed systems and database concepts (consistency, fault tolerance, transactions).
- Experience with CI/CD/Infrastructure as Code: GitHub Actions, Docker, Kubernetes.
- Hands-on experience running large-scale and long-running workloads, preferably in a cloud-native environment.
- Curiosity, rigor, and the ability to design experiments that simulate messy real-world conditions.
SOLUTION ENGINEER: https://jobs.ashbyhq.com/feldera/544aff74-263f-4749-a4d0-af0...
- 5+ years of experience in solution architecture, customer engineering, or solution engineering roles.
- Strong background in distributed systems, databases, cloud infrastructure, and modern data platforms.
- Experience with data-intensive systems in production (e.g., Kafka, Delta Lake, Iceberg, Kubernetes, monitoring/observability stacks).
- Exceptional debugging and problem-solving skills, especially in customer-facing contexts.
- Excellent communication skills, both for customer-facing and internal interactions.
- Ability to write and maintain high-quality technical docs and playbooks.
https://jobs.ashbyhq.com/feldera
Feel free to email with your resume: gz @ domain, put HN in the subject.
Although one might consider it surprising that OS developers have not updated their security models for this new reality, I would argue that no one wants to throw away the existing models because of 1) backward compatibility, and 2) the amount of work it would take to develop and market an entirely new, fully network-aware operating system.
Yes, we have containers and VMs, but these are just kludges on top of existing systems to handle networks and tainted (in the Perl sense) data.
Andrew Tanenbaum developed the Amoeba operating system with those requirements in mind almost 40 years ago, and plenty of others in the systems research community proposed similar systems. It's not that we don't know how to do it; it's just that the OSes that became mainstream didn't want to/need to/consider those requirements necessary/<insert any other potential reason I forgot>.