If the Model M or F appeal to you, at this price point you should also consider some more modern production keyboards, or even building your own from a kit + switches + keycaps. If you're after the heavy, tactile keypresses of buckling springs, a board with Cherry MX Clear (common) or Green (less common) switches will probably satisfy you - I can highly recommend any of Leopold's boards with Clears in them.
That said, I am thoroughly disappointed in the federal gov't and much of the media coverage. They have done nothing but make the situation worse. I think it is intentional (I assume some political endgame), but their actions are fueling even more outlandish conspiracy theories.
The most insane part was that all levels of government did nothing to stop the noise (truck horns), yet it ended when a 21 year old simply filed for a court injunction and the protesters complied.
https://www.cbc.ca/news/politics/injunction-ottawa-granted-1...
I've watched the Toronto Police Service play their A game through this entire debacle. They shut the protests down hard and were clearly visible throughout the city, with heavy trucks and buses to block roads and maintain control of the situation.
https://www.cp24.com/video?clipId=2376560
The idea that Justin Trudeau needs martial law to deal with parked trucks is outrageous. This isn't an insurrection (the reference to an MOU was removed from their website, and I agree with the assertion that it was a poorly-thought-out idea, not a threat); there is no violence and no obvious danger. The last person to use martial law was Trudeau's father (Pierre), for an actual terrorist attack and kidnappings (the diplomat was released, but the kidnapped Quebec minister was later murdered). Get some proper police on the job, drop the mandates for ineffective measures, and let's move on with our lives.
The key phrases are "imposition of direct military control of normal civil functions" and "suspension of civil law by a government".
The Canadian Emergencies Act, which was invoked by the Liberal government today, specifically states the following: "For greater certainty, nothing in this Act derogates from the authority of the Government of Canada to deal with emergencies on any property, territory or area in respect of which the Parliament of Canada has jurisdiction" [2].
I'd do a deeper reading, but I'm a bit lazy; my understanding is that the EA does not allow, in any way, a shift in governance that could be described as "martial law" - where the military is in control of civil functions and can create or remove laws as military leadership desires. Even with the EA invoked, the federal government still controls the Canadian military (but can be assisted in enforcing civil law _by_ the military).
I'm no fan of Trudeau either, but we should be precise when discussing hot situations like this. People can get very inflamed by internet posts, and the idea that we're under "martial law" is riling people up.
[1] https://en.wikipedia.org/wiki/Martial_law
[2] https://laws-lois.justice.gc.ca/eng/acts/e-4.5/page-1.html
You probably are aware, but Xilinx themselves are attempting this with their Versal AIE boards, which are (in spirit) similar to GPUs in that they group together a programmable fabric of programmable SIMD-type compute cores.
https://www.xilinx.com/support/documentation/architecture-ma...
I have not played with one, but I've been told (by a Xilinx person, so grain of salt) that the flow from a high-level representation [0] to that arch is more open.
[0] https://www.xilinx.com/products/design-tools/vitis/vitis-pla...
Broadly speaking, FPGA-based ML model accelerators are in an interesting space right now, where they aren't particularly compelling from a performance (or perf / Watt, perf / $, etc.) perspective. If you just need performance, then a GPU or ASIC-based accelerator will serve you better - the GPU will be easier to program, and ASIC-based accelerators from the various startups are performing pretty well. Where an FPGA accelerator makes a lot of sense is when you otherwise need an FPGA anyway, or need the other benefits of FPGAs (e.g. lots of easily-controlled IO) - but then you're just back to square one: there are some cases where an FPGA makes sense and many where it doesn't. Beyond that, there are a few niche cases where a mid-range FPGA might beat a mid-range GPU on perf / Watt or whatever metric is important to you.
Again, opinions are my own and all that. As someone in the space, I am very much hoping that someone - whether an ASIC startup or Xilinx / Intel - comes up with a "better" (more performant, cheaper, easier to use, etc.) solution than GPUs for ML applications. If the winner ends up being FPGAs, that would be really, really cool! It's just not too compelling at the moment, and I'm trying to be realistic.
All that said, FPGAs and their related supports (software, boards, etc.) are an $Xb / Y market - nothing to shake a stick at, and there are many cases where an FPGA makes sense. Just doesn't currently make sense for every dev to buy an FPGA card to drop in their desktop to play with.
Nobody has come up with a good answer yet. Developing for an FPGA still requires domain-specific knowledge, and because place & route (the "compile" step for an FPGA) is a pair of intertwined NP-hard problems, development cycles are necessarily long. Small designs might take an hour to compile; the largest designs deployed these days take ~24 hours.
All this to say: while they are neat, nobody has found the magic-bullet use case that will make everyone want one badly enough to put up with the pain of developing for them (a la machine learning for GPUs). Simultaneously, nobody has found the magic bullet to make developing for them any easier, whether by reducing the knowledge required or improving the tooling.
Effort has been made in places like High-Level Synthesis (HLS, compiling C/C++ code down to an FPGA), open-source tooling, and (everyone's favorite) simulation, but they all still kinda suck compared to developing software, or even the ecosystem that exists around GPUs these days. You'll often hear FPGA people saying stuff like "just simulate your design during development, compiling to hardware is just a last step to check everything works" - but simulation still takes a long time (large designs can take hours) and tracking down a bug in waveforms is akin to Neo learning to see the Matrix.
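To give a sense of what the HLS flow looks like (this is a toy sketch; the pragma syntax follows Xilinx's Vitis HLS, and the kernel itself is made up for illustration), you write ordinary C/C++ and annotate it with directives that the HLS compiler turns into hardware structure. A normal C++ compiler just ignores the pragmas:

```cpp
#include <cassert>

// Toy HLS-style kernel: element-wise vector add.
// To a regular compiler this is plain C++; to an HLS tool the pragma
// below is a directive to pipeline the loop so a new iteration starts
// every clock cycle (initiation interval of 1).
void vadd(const int* a, const int* b, int* out, int n) {
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II = 1
        out[i] = a[i] + b[i];
    }
}
```

The catch the parent describes is everything around this: whether the tool actually achieves II=1 depends on memory ports, loop-carried dependencies, and timing closure, which is where the domain-specific knowledge (and the long compile-debug cycles) come back in.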
I have some questions though:
1. do they teach you about sanitization and airlocks, or is it simply "toss some yeast in this bottle"
2. I'm guessing no aging? it would be awesome if they allowed you to pop some in a barrel for aging
3. (not really a question) but wow, $20/gallon is still kind of spendy!
I've never seen a shop do aging, so the wine will be noticeably "young". My parents like drier, sharper white wines anyway (Pinot Grigio, Riesling, etc.), so it doesn't bother them. Also note that due to taxes and such, the cheapest wine you'll find commercially here is C$11 a bottle, while C$20 / gallon works out to roughly C$4 per 750 mL bottle (about five bottles to the gallon) - a great deal if you like the resulting wines.
Personally, I quite like the wines my folks get through these shops - properly chilled they make a wonderfully refreshing beverage in the summer, and we'll often drink a few bottles on the back deck together when I go to see them.
Alas, this is BS.
https://www.youtube.com/watch?v=pRjsu9-Vznc
Thanks for the information though, I'll do some A/B testing to see if it makes any difference to me.
I think the 70s are also around the same time that a lot of familiar genres started to emerge, while music from before then is often dismissed as "oldies" or saved for special occasions - e.g. old crunchy recordings of Christmas songs.
OP claims to have done 400+ LC problems over a couple of months. Let me say it out loud here: this is simply crazy. It strikes me as an attempt not to learn how to tackle these challenges, but to brute-force through them. To anyone preparing for an interview: don't do this! Grab a book like Elements of Programming Interviews (EPI), maybe follow some online courses on programming-puzzle patterns, and then start grinding LC. Maybe interleave grinding with learning? Your end goal should be to develop a deep understanding of what you are doing, not to memorise the solutions.
Also, while going through LeetCode, it is very important to realise that the problem classification there is a bit wild at times. Don't stress when you can't solve a medium; some are mislabeled. I did mediums that could have been hards, hards that could have been mediums, and hards that were just impossibly hard. Typically a very hard problem is not something you should expect in an interview setting, as most interviewers* don't expect you to implement KMP on the spot. It doesn't hurt to know it and impress the interviewer with the knowledge, but if you think memorising KMP is the way, you're mistaken.
* - there is still luck involved and you can have a crazy interviewer. It can happen, so just accept it and move on. Don't treat it as a personal defeat.
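For context on why "implement KMP on the spot" is a big ask: the algorithm is short but fiddly, and the failure-function bookkeeping is easy to get subtly wrong under pressure. A sketch (my own, not from any particular LC solution):

```cpp
#include <string>
#include <vector>

// Failure (prefix) function: pi[i] is the length of the longest proper
// prefix of pat[0..i] that is also a suffix of it.
std::vector<int> prefix_function(const std::string& pat) {
    std::vector<int> pi(pat.size(), 0);
    for (size_t i = 1; i < pat.size(); ++i) {
        int k = pi[i - 1];
        // Fall back through shorter borders until a match (or zero).
        while (k > 0 && pat[i] != pat[k]) k = pi[k - 1];
        if (pat[i] == pat[k]) ++k;
        pi[i] = k;
    }
    return pi;
}

// KMP search: index of the first occurrence of pat in text, or -1.
// O(n + m) because the matched-prefix counter k only falls back via pi,
// never rescanning text.
int kmp_search(const std::string& text, const std::string& pat) {
    if (pat.empty()) return 0;
    std::vector<int> pi = prefix_function(pat);
    int k = 0;  // number of pattern characters currently matched
    for (size_t i = 0; i < text.size(); ++i) {
        while (k > 0 && text[i] != pat[k]) k = pi[k - 1];
        if (text[i] == pat[k]) ++k;
        if (k == static_cast<int>(pat.size()))
            return static_cast<int>(i - pat.size() + 1);
    }
    return -1;
}
```

The two nearly-identical fallback loops are exactly the kind of detail that's easy to fumble live, which is why understanding the idea beats memorising the code.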
Source: I grinded and I had offers from most of FAANG letters.