- Zero external dependencies
- Works in Node.js and all modern browsers
- Tiny: 2kb core bundle (gzipped)
The choice of C++ is bold.
Despite the security concerns often highlighted, modern C++ with smart pointers and RAII patterns can be just as safe as Rust when done right. Vaev’s security model should focus on process isolation, sandboxing techniques, and leveraging modern C++ features to minimize vulnerabilities.
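To make that concrete, here's a minimal sketch of the ownership style I mean (hypothetical types, not from the Vaev codebase): with `unique_ptr`, a whole DOM subtree is freed deterministically with no manual `delete` anywhere, ruling out leaks and double frees on this path.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical node type for illustration; not Vaev's actual DOM.
// unique_ptr expresses sole ownership: children die with their parent.
struct Node {
    std::string tag;
    std::vector<std::unique_ptr<Node>> children;
};

std::unique_ptr<Node> makeTree() {
    auto root = std::make_unique<Node>();
    root->tag = "html";

    auto body = std::make_unique<Node>();
    body->tag = "body";

    // Ownership of `body` moves into the tree; no raw new/delete.
    root->children.push_back(std::move(body));
    return root; // ownership moves out to the caller
}

int main() {
    auto tree = makeTree();
    // When `tree` goes out of scope, the entire subtree is destroyed
    // automatically: no leaks, no double frees, no manual cleanup.
}
```

Of course, unlike Rust, nothing stops a stray raw pointer from outliving the tree, which is exactly where the process isolation and sandboxing earn their keep.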
Super excited to see such raw innovation and courage in tackling a colossal task often monopolized by juggernauts like Chromium.
All the automation in the world is useless without a human guide to turn the output into either a useful product or knowledge worth heeding. That's why this push to remove human labor is asinine. Even skilled humans can't always get the right readings, so expecting a robot to do it all at this stage is just selling snake oil.
The current human-in-the-loop model exists largely because our technology hasn't been good enough yet, not because there's something inherently special about human judgment in this context. Weather prediction is fundamentally a pattern recognition problem. Pattern analysis at scale is exactly what computers do better than us.
Perhaps someone could apply to YC with this idea. There is one YC startup doing this already: https://www.ycombinator.com/companies/atmo
Anyone can suggest a law. The stage this one failed in is explicitly meant to gauge if there would be any reasonable support to get it passed. The answer was a resounding No.
Even if it had proceeded, it would quite likely have led to a popular referendum, given Switzerland's system of direct democracy. I'd say few places in the world have defenses against laws like this as strong as Switzerland's.
Of course, it doesn't mean that it's not important to highlight when such ideas do crop up, and especially naming and shaming who/where they come from. I'm glad Proton et al. spoke out.
The "super tutor" stuff that is always mentioned as the utopian outcome (along with "cures for cancer") is, unsurprisingly, never something being worked on by the person or lab quoting these examples.
I guess anything goes in B2B settings, but there is a valid reason to be cautious about these advances when it comes to mass-market consumer-facing applications.
The "super tutor" isn't some distant fantasy - millions already use ChatGPT, Claude and similar tools daily for personalized learning. They're imperfect but genuinely helpful for programming, languages, math, and countless other topics.
Look at what happened with YouTube: millions of people transformed themselves into programmers, musicians, mechanics, and countless other professions through free video tutorials. Khan Academy revolutionized math education. Coursera and edX brought university courses to anyone with internet. This wasn't utopian thinking - it was practical technology solving real educational problems at scale.
What's different now is that LLMs enable the missing piece: personalization. The one-on-one adaptive experience that was previously limited to those who could afford human tutors at $50-100/hour is now available to anyone at negligible marginal cost.
Your skepticism about the cancer applications likewise ignores the technological trajectory we've been on for decades. Just as YouTube and online platforms democratized education, technology has been steadily dismantling bottlenecks in medical research.
The human genome project initially cost $3 billion and took 13 years. Today you can sequence a genome for under $1,000 in days. This wasn't utopian thinking; it was technological progress following its natural course.
Think what LLMs will do here.
But guess what? Now, finally, we can co-opt LLMs for things humans fumble: e.g., real-time conversational tutoring, adaptive negotiation agents, or even scalable personal 'bullshit detectors' as countermeasures. I hope the conversation doesn't veer into AI-safetyism and restricting LLMs, and stays focused on building instead. Let's build, not block.
It's worth being specific: the National Weather Service operates some of the most robust automation and radar ingest pipelines on Earth, but the final go/no-go warning call is almost always human—often a single overnight forecaster on a console, monitoring a swath of counties. Automation (e.g., Warn-on-Forecast guidance) can surface threats, but the NWS intentionally doesn't have an 'auto-warn' button for tornadoes, because of the asymmetry of false positives (blow credibility, cost lives in the long run).
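To put rough numbers on that asymmetry (all figures below are invented for illustration, not NWS data): a toy expected-cost model shows how a credibility tax from false alarms can make aggressive auto-warning worse than a slower, human-gated call.

```cpp
#include <cstdio>

// Toy expected-cost model of the false-alarm asymmetry.
// Every number here is an illustrative assumption, not NWS data.
int main() {
    double p_event        = 0.05;  // chance a flagged cell actually produces a tornado
    double cost_miss      = 100.0; // relative cost of a missed warning
    double cost_false     = 1.0;   // direct cost of one false alarm
    double compliance_hit = 0.02;  // each false alarm erodes future public
                                   // compliance, multiplying the cost of later misses

    // Aggressive auto-warn: warn on every flagged cell.
    double false_alarms = 1.0 - p_event;
    double auto_cost = false_alarms * cost_false
                     + false_alarms * compliance_hit * cost_miss; // credibility tax

    // Human-gated: assume it misses 10% of real events,
    // but nearly eliminates false alarms.
    double human_cost = 0.10 * p_event * cost_miss;

    std::printf("auto-warn expected cost:   %.2f\n", auto_cost);  // 2.85
    std::printf("human-gated expected cost: %.2f\n", human_cost); // 0.50
}
```

With these made-up numbers the credibility tax dominates, which is the intuition behind keeping a human on the trigger.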
Budget cuts reduce redundancy and experience in those overnight shifts. When you have only one person monitoring instead of a team of two or three, you get decision fatigue and coverage holes, especially during clustered, multi-cell outbreaks. We've seen near-misses in the past, and every professional meteorologist I know says they're playing defense against process errors, not just technology failures.
Before we point fingers or blame 'technology/automation' shortfalls, let's name the concrete bottleneck: skilled human decision-makers are the limiting reagent, and machine-learning warning aids are still years away from earning majority trust.
Trump campaigned on cutting government services.
Everyone is okay with cutting a public service (at the expense of others) until they need that particular service.
---
To clarify, I'm not cheering on this disaster or hoping that those who voted for Trump "get what they deserve".
ANY critical pipeline that can be broken by one missing seat is overdue for technical reinforcement.
I'm not too worried about the human factor being replaced as a whole here. Even with AI, someone needs to interpret the output and make sure the prediction models actually work.