Haven't heard from them since.
I'll take my chances in the open source world. It's a shame that the companies that created the software aren't getting paid, truly. But don't make it so obnoxious for people to reward you.
They do on their paid plans. https://proton.me/support/email-forwarding
The downside is that downloading messages is fairly slow when you have 10-20k messages in your inbox. And the webmail is fairly primitive.
I never tried Fastmail.
It certainly makes sense that short cycle length would correlate with surviving pathogens. The lousy LG "TurboWash" only takes 28 minutes to do a full load of laundry, but it certainly doesn't get very much clean in that time.
I have to admit it was surprising that textiles have been identified as the source of hospital-acquired infections. You'd think that even if the laundering didn't eliminate pathogens, it would greatly reduce them and make any clusters more diffuse.
In any case, in my rural homestead region there are mainly three classes:

1) .gov pensioners
2) successful professionals
3) inherited property
The key to government pensions here, I think, is that the benefits arrive early enough in life that pensioners are still young enough to build and live a homestead life. At age 65+ you might be able to maintain an established property, but buying something affordable (read: rough or vacant land) and getting it up and running would be pretty tough for most at that age.
I saw Riverrock over Christmas when it was 95% complete, and it does look really cool. Similar in a lot of ways, especially the living room, but quite a different floor plan. I hope the doors are a bit wider than in the Louis Penfield house on the same site; even folks of normal width have to rotate sideways to get through. Toilet in a narrow alcove, narrow cushions on the furniture, etc. Absolute commitment to design integrity, not always comfortable. Still a fascinating place to stay.
I did have some excellent university classes (including ones so good that I audited them without receiving credit), but I also had a lot that were positively abysmal, taught by professors struggling with serious health issues (one who'd had a stroke and could no longer comprehend the material, another who was going through a mental break and stopped teaching us altogether, etc.) or by extremely stressed grad students who were not fluent in English and spent class time trying to catch up on their PhD workload.
My best university-level education actually came after I graduated and got a job working in a lab at my university. During that time, I worked closely with the professor and grad students, and it was such an amazing learning opportunity that I will never forget for the rest of my life — sadly cut short by the 2008 financial crisis.
> In a recent pre-print paper, researchers from the University of Arizona summarize this existing work as "suggest[ing] that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text."
What does this even mean? Let's veto the word "reasoning" here and reflect.
The LLM produces a series of outputs. Each output changes the likelihood of the next output. So it's transitioning in a very large state space.
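To make that concrete, here's a toy sketch (my own illustration, nothing from the paper; the "model" and vocabulary are made up): each token the model emits becomes part of the conditioning context for the next one, so generation is a walk through a state space where every step shifts the distribution over the next step.

    import random

    def toy_next_token_probs(state):
        # Hypothetical stand-in for a trained model: the next-token distribution
        # depends on what has been generated so far (here, trivially, on the last token).
        last = state[-1]
        vocab = ["think", "step", "therefore", "answer", "<eos>"]
        weights = [1 + (last == w) for w in vocab]  # toy dependence on history
        total = sum(weights)
        return {w: wt / total for w, wt in zip(vocab, weights)}

    def generate(prompt, max_steps=20):
        state = list(prompt)                  # the "state" is just the text so far
        for _ in range(max_steps):
            probs = toy_next_token_probs(state)
            nxt = random.choices(list(probs), weights=list(probs.values()))[0]
            state.append(nxt)                 # every sampled token moves us to a new state
            if nxt == "<eos>":
                break
        return state

    print(generate(["What", "is", "2+2?"]))

The intermediate tokens aren't decoration; they are literally the path the model takes from the input toward whatever state ends up producing the answer.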
Assume there exist states that the activations could be in that would cause the correct output to be generated. Assume also that there is some possible path of text connecting the original input to such a success state.
The reinforcement learning objective reinforces pathways that were successful during training. If there's some intermediate calculation to do, or an 'inference' that could be drawn, writing out text that makes it explicit might be a useful step. The reinforcement learning objective is supposed to encourage the model to learn such patterns.
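Again, a toy sketch of that idea (my own schematic, not how any lab actually trains; the reward-weighted update stands in for a proper policy-gradient objective): chains that happen to end in the correct answer get their transitions upweighted, so emitting a useful intermediate step becomes more likely on similar inputs.

    import math, random

    logits = {}  # preference scores for (previous token, next token) transitions

    def policy_sample(state, vocab):
        # Sample the next token with probability proportional to exp(score),
        # conditioning (crudely) on the last token only.
        scores = [math.exp(logits.get((state[-1], t), 0.0)) for t in vocab]
        return random.choices(vocab, weights=scores)[0]

    def reinforce(episodes, vocab, correct_answer, lr=0.5):
        for _ in range(episodes):
            state = ["Q"]                      # stand-in for the prompt
            for _ in range(3):                 # emit a short chain of "intermediate text"
                state.append(policy_sample(state, vocab))
            reward = 1.0 if state[-1] == correct_answer else 0.0
            # Upweight every transition along a chain that ended in the right answer.
            for prev, tok in zip(state, state[1:]):
                logits[(prev, tok)] = logits.get((prev, tok), 0.0) + lr * reward

    random.seed(0)
    reinforce(200, vocab=["step", "guess", "4", "5"], correct_answer="4")
    print(sorted(logits.items(), key=lambda kv: -kv[1])[:3])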
So what does "sophisticated simulators of reasoning-like text" even mean here? The mechanism that the model uses to transition towards the answer is to generate intermediate text. What's the complaint here?
It makes the same sort of sense to talk about the model "reasoning" as it does to talk about AlphaZero "valuing material" or "fighting for the center". These are shorthands for describing patterns of behaviour, but of course the model doesn't "value" anything in a strictly human way. The chess engine usually doesn't see a full line to victory, but in the games it's played, paths which transition through states with material advantage are often good -- although it depends on other factors.
So of course the chain-of-thought transition process is brittle, and it's brittle in ways that don't match human mistakes. What does it prove that there are counter-examples with irrelevant text interposed that cause the model to produce the wrong output? It shows nothing --- it's a probabilistic process. Of course some different inputs lead to different paths being taken, which may be less successful.
Even as someone who kinda understands how the models are trained, I find it surprising that they struggle when the symbols change. One thing computers are traditionally very good at is symbolic logic. Graph bijection. Stuff like that. So it's worrisome when they fail at it, even in this research model, which is much, much smaller than current or even older models.