I was testing IPv6 origin support (they don’t support it), and they billed me $2 for a couple of test requests. I was testing at the end of the month.
With other providers, this would have cost only a few cents.
Netcup does oversubscribe/overshare, but not by that much. I have a server there and I haven't really noticed it, although I haven't found a reliable way to measure that CPU-steal factor. There are definitely scripts that can detect it; maybe I'll run one some day, but laziness wins.
The most overshared VPS provider I know is Contabo. Search literally anywhere people gather (Reddit, LowEndBox) and you'll find mentions of it; off the top of my head, people cite something like a 20-30% oversubscription figure.
I'm not exactly sure, but my point is this: when I first saw them, I found them the cheapest option (their Contabo auctions offered real scale, 96 GB of RAM or something), but they're off my list even as a frugal guy, simply because of how unstable they are and how consistently I've seen people struggle with Contabo. It's simply not recommended, IMHO. Netcup is 10x more pleasant judging by other people's reactions, and that roughly aligns with my own experience with them too, I guess. People do mention some steal factor on Netcup, but overall it's really good.
One of my clients had dedicated servers at Contabo and had to move to OVH because of it.
https://github.com/orgs/community/discussions/10539
For countries: if you mean connecting to a VPS, a lot of countries have good IPv6 connectivity now. For me, both ISPs I use have native v6. This will differ from person to person.
IPv4 shortages didn’t kill it, and I don’t think this will either.
So when I upgrade this year, I can either get a top-tier tool or buy a subpar device whose power management will likely be even worse on Linux.
Some quick googling says they have lasers that clear a path for a data-carrying beam, but that seems wasteful/infeasible for commercial use.
"Even Earth’s atmosphere interferes with optical communications. Clouds and mist can interrupt a laser. A solution to this is building multiple ground stations, which are telescopes on Earth that receive infrared waves. If it’s cloudy at one station, the waves can be redirected to a different ground station. With more ground stations, the network can be more flexible during bad weather. SCaN is also investigating multiple approaches, like Delay/Disruption Tolerant Networking and satellite arrays to help deal with challenges derived from atmospheric means."
https://www.nasa.gov/technology/space-comms/optical-communic...
Some more info on Optical Communications for Satellites: https://www.kiss.caltech.edu/workshops/optcomm/presentations...
Having an EU-based one would be great.
I just tested on Chrome for Android via remote inspect in the developer tools. It loaded the image even when the image was below the fold.
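For context, this is the kind of minimal page I'd use to reproduce that check (the filename is a placeholder, not from the thread); the claim being tested is whether Chrome defers the request for a `loading="lazy"` image that starts well below the viewport:

```html
<!-- Tall spacer so the image starts well below the fold -->
<div style="height: 300vh"></div>
<!-- With loading="lazy", the browser is expected to defer this
     request until the image nears the viewport -->
<img src="photo.jpg" loading="lazy" alt="lazy-load test">
```

In the DevTools Network panel you can then see whether the request fires on initial load or only after scrolling.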
The failure mode I kept hitting wasn’t just "it makes mistakes", it was drift: it can stay locally plausible while slowly walking away from the real constraints of the repo. The output still sounds confident, so you don’t notice until you run into reality (tests, runtime behaviour, perf, ops, UX).
What ended up working for me was treating chat as where I shape the plan (tradeoffs, invariants, failure modes) and treating the agent as something that does narrow, reviewable diffs against that plan. The human job stays very boring: run it, verify it, and decide what’s actually acceptable. That separation is what made it click for me.
Once I got that loop stable, it stopped being a toy and started being a lever. I’ve shipped real features this way across a few projects (a git like tool for heavy media projects, a ticketing/payment flow with real users, a local-first genealogy tool, and a small CMS/publishing pipeline). The common thread is the same: small diffs, fast verification, and continuously tightening the harness so the agent can’t drift unnoticed.
These are some tricks I use now.
1. Write a generic prompt about the project and software versions and keep it in the folder. (I think this is getting pushed as SKILLS.md now.)
2. In the prompt, add instructions to leave comments on changes. Since our main job is to validate and fix any issues, this makes it easier.
3. Find the best model for the specific workflow. For example, these days I find that Gemini Pro is good for HTML/UI work, while Claude Sonnet is good for Python code. (This is why subagents are getting popular.)
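For tip 1, a minimal sketch of what such a project prompt file might contain (the stack, versions, and commands here are placeholders, not from the thread):

```markdown
# Project notes for the agent

- Stack: Python 3.12, Django 5.x, Postgres 16 (example versions; substitute your own)
- Run the test suite (e.g. `pytest`) before proposing a diff
- Leave a short comment on every non-trivial change explaining why it was made
- Keep diffs small and scoped to one concern at a time
```

Keeping this in the repo means every session starts from the same constraints instead of re-explaining them in chat.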