It's especially surprising if the 'audio playing' icon is there, since that should be coming from the browser itself.
Whenever I switched between tasks (thinking vs reading vs writing) I'd turn the sound on or off depending on how much attention the moment needed. The minor problem was that sometimes I'd unexpectedly stick with the new task longer than planned, start to get bored, and whatever background sound I had on no longer matched the task, so I'd go looking for something else... Overall a bit annoying for some groups of tasks.
I'm experimenting with mixing music, podcasts, and extra noise and turning it on and off, but I also made https://stimulantnoi.se/ (with extra reading on the psychological basis of the design and a link to the open source standalone desktop app at https://incentiveassemblage.substack.com/p/why-is-nobody-ser...). It lets you mix sounds (including ones you upload) into sets and binds switching between those whole sets to media keys for quick access.
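A minimal sketch of the set-switching idea, assuming pynput for the media-key hook; the set names and file names are made up, and playback is stubbed out with prints (the real app would start/stop the sound loops in the active set):

    from pynput import keyboard

    # Hypothetical sound sets; the real app mixes several loops per set.
    SOUND_SETS = [
        {"name": "deep focus", "sounds": ["brown_noise.mp3", "rain.mp3"]},
        {"name": "reading",    "sounds": ["cafe.mp3"]},
        {"name": "off",        "sounds": []},
    ]

    active = 0

    def switch_set(step):
        """Cycle the active set and report it; real playback would stop the
        old loops and start the new ones here."""
        global active
        active = (active + step) % len(SOUND_SETS)
        current = SOUND_SETS[active]
        print(f"now playing set '{current['name']}': {current['sounds'] or 'silence'}")

    def on_press(key):
        # pynput exposes the media keys as Key.media_next / Key.media_previous.
        if key == keyboard.Key.media_next:
            switch_set(+1)
        elif key == keyboard.Key.media_previous:
            switch_set(-1)

    with keyboard.Listener(on_press=on_press) as listener:
        listener.join()

The point of binding whole sets (rather than individual sounds) to the keys is that one press swaps the entire soundscape without breaking out of the current task.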
The terms you want to check for more detailed info are 'fluid intelligence' and 'crystallized intelligence', but you basically nailed it.
I wish I were hiring, if that's what you're asking ;) Otherwise, if you have any ideas for processing formulas, I'd love to hear them - even just for reading them out loud, though any extra step towards expressing what they mean helps ("'sum divided by count' is the 'mean'/'average' value" being the simplest example I can think of). Novel ideas in technical papers are often expressed with formulas that aren't that complicated conceptually but are critical to understanding the whole paper, and that was another piece I was having very mixed results with.
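To illustrate the 'reading out' direction, here's a toy sketch of my own (an assumption about how it might look, nothing from an existing tool) that walks a parsed arithmetic expression and speaks it in plain words; real formulas with sums, subscripts or integrals would need a proper LaTeX/MathML parser:

    import ast

    # Toy "formula to speech" translator for simple arithmetic expressions.
    WORDS = {ast.Add: "plus", ast.Sub: "minus", ast.Mult: "times", ast.Div: "divided by"}

    def speak(node):
        if isinstance(node, ast.BinOp):
            return f"{speak(node.left)} {WORDS[type(node.op)]} {speak(node.right)}"
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Constant):
            return str(node.value)
        raise ValueError(f"unhandled node: {ast.dump(node)}")

    expr = ast.parse("(x1 + x2 + x3) / n", mode="eval").body
    print(speak(expr))  # -> "x1 plus x2 plus x3 divided by n"

Even this tiny version shows the hard part: the parentheses/grouping are lost in the spoken output, and that grouping is exactly what carries the meaning ('the sum, divided by the count, is the mean').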
Three approaches I had the most, though not full, success with are: 1) converting to images with pdf2image, then reading with pytesseract, 2) throwing whole PDFs into pypdf, 3) experimental multimodal models.
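Roughly what the first two look like in code (a minimal sketch; the file name is made up, pdf2image needs poppler installed, and pytesseract needs the tesseract binary):

    from pdf2image import convert_from_path
    import pytesseract
    from pypdf import PdfReader

    PDF = "paper.pdf"  # hypothetical input file

    # 1) Rasterize each page and OCR it - handles scanned/graphics-heavy PDFs,
    #    but OCR tends to mangle math and multi-column layouts.
    ocr_text = []
    for page_image in convert_from_path(PDF, dpi=300):
        ocr_text.append(pytesseract.image_to_string(page_image))

    # 2) Pull the embedded text layer directly - fast and clean for born-digital
    #    PDFs, but returns little or nothing for scans and loses formula structure.
    reader = PdfReader(PDF)
    embedded_text = [page.extract_text() for page in reader.pages]

    print(len(ocr_text), "pages via OCR,", len(embedded_text), "pages via text layer")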
You can get more out of them the more predictable you make the content: if you know a part is going to be pure text, just put it through pypdf; if you know it's going to be a math formula, explain the field to the model and have it read the formula back for a high-accessibility-needs audience. But it continues to be a nightmare and a bottleneck.
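The routing idea as a sketch - the per-page labels and the describe_formula call are placeholders for whatever layout classifier and multimodal model you have, not a real API:

    from pypdf import PdfReader

    def describe_formula(page, field_context):
        """Placeholder: render the page to an image, tell the model the field
        ('this is a statistics paper'), and ask it to read the formula back."""
        raise NotImplementedError("plug in your multimodal model here")

    def extract(pdf_path, page_types):
        """page_types: hypothetical per-page labels, e.g. ['text', 'formula', 'text']."""
        reader = PdfReader(pdf_path)
        out = []
        for page, kind in zip(reader.pages, page_types):
            if kind == "text":
                # Predictably plain pages: the embedded text layer is good enough.
                out.append(page.extract_text())
            else:
                # Formula-heavy pages: hand them to the model with field context.
                out.append(describe_formula(page, field_context="statistics paper"))
        return out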
For the pure fun of breaking the narrative I found the original article; it's here: https://bmjpublichealth.bmj.com/content/2/1/e001000
The time of day (or time since waking, per subject) when the tests were administered was not controlled. Cognitive abilities are mediated by the wakefulness cycle (not to mention, for most people, the related digestive processes).
If '"Night owls" smarter than morning people' sounds more plausible than 'time since waking up and last meal predictive of cognitive performance' it's time to get one's identity checked. And I can't imagine 'journalists' from thrash like Sky (Guardian this time) not knowing that, which brings me to the final point: what is this link doing here?
There are a lot of similar apps and people seem to use and like all of them; I'm just curious how it looked from the inside of this one.
Ideally, I'd like to allow sharing and storing of presets, but it was simply out of scope for the PoC. The functionality is there in the desktop version btw, but that one, on the other hand, asks users to download an unknown .exe and then share mp3 and json files with each other, putting us firmly in the mid-90s.