Have they seen any vehicle newer than, I dunno... 2010? Everything is a nondescript blob car these days.
But I think the point still stands. WealthSimple is probably not perceived by the median customer as a traditional bank. So people using it is a counter-example to GGP's point that people won't use "startup" banks.
Vanguard's asset allocation ETFs are at like $1.3T [2]. Four of Canada's Big Banks appear to add up to just over $2T in assets under management, based on the summary Google just gave me. So while I think this is a great outcome for a startup (even with Power backing them), it seems to be in a similar space as the above article: we're still talking about a relatively small market share, and likely still closer to early-adopter status.
[1] - https://en.wikipedia.org/wiki/Wealthsimple#:~:text=As%20of%2... [2] - https://www.vanguard.ca/en/product/investment-capabilities/a...
I always recommend the book The Mom Test to would-be entrepreneurs. It goes into more detail on why asking people if they will buy something is worthless (as you mentioned), and how you can ask much better questions to find and validate problems worth solving.
I'd say that in addition to entrepreneurs, it's an important book for product teams and product engineers: understand what The Mom Test teaches, tune your filter to ask the questions that yield the highest signal, and ensure the solution closely matches the value prop for the customer. Sales and marketing get a whole lot easier when you've asked the right questions and solved the right problems.
I doubt that most script kiddies are filtering out potential honeypots/things like this from their tools.
If you are specifically being targeted, there might be a slight delay while the adversary tries to exploit the honeypot ports, but if you're running a vulnerable service you still get exploited.
Also, if you're a vendor, when prospective customers' security teams scan you, you'll have some very annoying security questionnaires to answer.
Can WireGuard be used for a multi-WAN setup / speed aggregation?
In fractional reserve banking, money that is loaned out is accounted for as liabilities. These liabilities subtract from the overall balance stored (reserved) at the bank. The bank is not printing new money, no matter how many times this idea gets repeated by people who are, ironically, pumping crypto coins that were printed out of thin air.
I think it’s incredible that cryptocurrencies were literally manifested out of bits, but the same people try to criticize banks for doing this same thing (which they don’t).
To expand a bit, I believe some of the confusion around the printing of money comes from the way some economics reports are built. As a micro example: assume a 10% required reserve. If Alice deposits $100 and the bank lends $90 to Bob, then Alice ($100 in deposits) and Bob ($90 in cash) together think they have $190.
This is mainly useful for economists to understand, study, and report on. However, when the reports get distributed to the public, it looks like the banks printed their own money, as we now see $190 on the report when there is only $100 of cash in our example system.
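The Alice/Bob bookkeeping above can be sketched in a few lines. This is purely an illustration of the arithmetic in the comment, not a model of any real bank:

```python
# Toy illustration of the deposit/loan bookkeeping described above.
# Names and numbers follow the Alice/Bob example.

RESERVE_RATIO = 0.10  # 10% required reserve

alice_deposit = 100.0
reserves_kept = alice_deposit * RESERVE_RATIO  # $10 stays at the bank
loan_to_bob = alice_deposit - reserves_kept    # $90 is lent out

# What each party believes they hold:
alice_claim = alice_deposit  # Alice's account still shows $100
bob_cash = loan_to_bob       # Bob holds $90 in cash

apparent_money = alice_claim + bob_cash
print(apparent_money)  # 190.0 -- the figure the report shows

# The base cash in the system never changed:
base_cash = reserves_kept + bob_cash
print(base_cash)       # 100.0
```

The gap between `apparent_money` and `base_cash` is exactly the $190-vs-$100 discrepancy that makes the reports look like money printing.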
Whether the system should work on a fractional reserve is its own debate, but we need to know what it is before we can debate the merits and risks of the system.
But some of the buggiest stuff I've dealt with was in codebases that had full coverage, because none of the tests were designed to test the original intent of the code.
Viewed another way, this might just be a fancy way of doing snapshot testing: use AI to generate all the inputs to produce a robust snapshot, but realize the output isn't unit tests; it's snapshots that report changes in outputs, which devs will just rubber-stamp.
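For anyone unfamiliar with the pattern, here's a minimal sketch of snapshot testing; `normalize_phone` and the inputs are made up for illustration:

```python
def normalize_phone(raw: str) -> str:
    """Toy function under test (stand-in for real code)."""
    return "".join(ch for ch in raw if ch.isdigit())

def check_snapshot(name: str, value, snapshots: dict) -> bool:
    """Compare value against the stored snapshot; record it on first run."""
    if name not in snapshots:
        snapshots[name] = value      # first run: record, don't judge
        return True
    return snapshots[name] == value  # later runs: flag any drift

snapshots = {}
inputs = ["(555) 123-4567", "555.123.4567"]  # these could be AI-generated
for raw in inputs:
    ok = check_snapshot(raw, normalize_phone(raw), snapshots)
    # A failing check here only means "the output changed", not
    # "the output is wrong" -- a dev still has to decide which,
    # and that's where the rubber-stamping creeps in.
```

The key property is that the snapshot only encodes what the code *did* on first run, not what it *should* do, which is exactly the weakness the comment is pointing at.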
30 years ago or so I worked at a tiny networking company where several coworkers came from a small company (call it C) that made AppleTalk routers. They recounted being puzzled that their competitor (company S) had a reputation for having a rock-solid product, but when they got it into the lab they found their competitor's product crashed maybe 10 times more often than their own.
It turned out that the competing device could reboot faster than the end-to-end connection timeout in the higher-level protocol, so in practice failures were invisible. Their router, on the other hand, took long enough to reboot that your print job or file server copy would fail. It was as simple as that, and in practice the other product was rock-solid and theirs wasn't.
(This is a fairly accurate summary of what I was told, but there's a chance my coworkers were totally wrong. The conclusion still stands, I think - fast restarts can save your ass.)
Each running process had a backup on another blade in the chassis. All internal state was replicated, and the processes were written in a crash-only fashion: if anything unexpected happened, the process would just minicore and exit.
One day, I think, I noticed that we'd had over a hundred thousand crashes in the previous 24 hours, but no one complained; we just sent the minicores over to the devs and got them fixed. In theory, some of the users triggering the crashes would be impacted (their devices might glitch and need to re-associate with the network), but the crashes caused no widespread impact in that case.
To this day I'm a fan of crash only software as a philosophy, even though I haven't had the opportunity to implement it in the software I work on.
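The shape of the pattern can be sketched in a few lines. This is a toy illustration under my own assumptions (a `Minicore` exception standing in for the dump-and-exit, in-memory state standing in for the replicated state on the backup blade), not the actual system described above:

```python
class Minicore(Exception):
    """Stand-in for the process dumping a minicore and exiting."""

def crash_only_worker(tasks, results):
    """Process tasks; on anything unexpected, record state and die.
    No in-process recovery -- the supervisor restarts us instead."""
    while tasks:
        task = tasks[0]
        if task == 0:
            # Don't try to handle the unexpected input: crash.
            raise Minicore(f"unexpected input: {task!r}")
        results.append(100 // task)
        tasks.pop(0)

def supervisor(tasks):
    """Restart the worker on every crash, like the standby blade above."""
    results, crashes = [], 0
    while tasks:
        try:
            crash_only_worker(tasks, results)
        except Minicore:
            crashes += 1
            tasks.pop(0)  # drop the poison task (real systems ship the core to devs)
        # State (tasks/results) survives each crash because it lives
        # outside the worker, mirroring the replicated internal state.
    return results, crashes

results, crashes = supervisor([10, 0, 5])
print(results, crashes)  # [10, 20] 1
```

The point of the pattern is that all the recovery logic lives in one place (the supervisor), so the worker itself never has to reason about partially corrupted state.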
> It emits an event, then immediately returns a response — meaning it always reports success (201), regardless of whether the downstream email handler succeeds or fails.
It should be understood that after Lambda returns a response, the MicroVM is suspended, interrupting any in-flight background HTTP request. There is zero guarantee that the request will succeed. [1]
1: https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtime-...
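The fix is to finish the work before the handler returns, and to report the real outcome instead of a blind 201. A minimal sketch, with a toy `send_email`/`EmailError` standing in for the actual downstream handler:

```python
class EmailError(Exception):
    pass

def send_email(event):
    """Toy stand-in for the downstream email handler (assumed name)."""
    if not event.get("to"):
        raise EmailError("missing recipient")

def handler(event, context=None):
    # Do the work synchronously, *before* returning: once the handler
    # returns, Lambda may freeze the execution environment and any
    # in-flight background request along with it.
    try:
        send_email(event)
    except EmailError:
        return {"statusCode": 502}  # report the real outcome, not a blind 201
    return {"statusCode": 201}
```

(If the email really must be async, hand it off to something durable like a queue before returning, rather than leaving an HTTP request in flight inside the frozen MicroVM.)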