https://joeldare.com/trying-to-stop-procrastination-with-my-...
I am starting to collect too many of them, though. I kinda like the idea of OP's text file because it is renewed from day to day. I'm still not quite sure how to deal with the items I know I need to get to eventually but won't get to today. I'm also not sure how to deal with the pile growing continually.
I have noticed that thermal notes fade relatively quickly. When they do, I have to think about whether I want to reprint them or just throw them out.
edit: And yeah, this is all anecdotal for me. No clue how much nicotine you actually take in via these methods.
Towards the end of that period, hints of legislation restricting the sale of juices started to appear, which made things a bit more complicated for consumers.
Then Juuls became popular, featuring higher nicotine content and almost invisible vapor, and nothing was ever the same.
I was wondering which TTS voices you use. I've heard from other blind people that they tend to prefer the classic, robotic voices over modern ML-enhanced voices. Is that true in your experience, too?
Sounds like the robotic voice is more important than we give it credit for, though - from the article's "Do You Really Understand What It’s Saying?" section:
> Unlike human speech, a screen reader’s synthetic voice reads a word in the same way every time. This makes it possible to get used to how it speaks. With years of practice, comprehension becomes automatic. This is just like learning a new language.
When I listened to the voice sample in that section of the article, it sounded very choppy, almost as if not every phoneme was captured. Maybe the phonemes are all there, or maybe they actually aren't - but the fact that the sound per word is _exactly_ the same every time suggests that each sound serves as a precise substitute for the 'full' or 'slow' word, meaning that any variation introduced by a "natural" voice could actually make the 8x speech unintelligible.
Hope the author can shed a bit of light; it's so neat! I remember ~20 years ago the Sidekick (or a similar phone) seemed to be popular in blind communities because it also had settings to significantly speed up TTS. Someone let me listen to it once, and it sounded just as foreign as the recording in TFA.
We actually have a sample of the Bosch sensor in our office but haven't gotten around to testing it yet. Maybe with this call I will get our team onto it.
The form factor has pros and cons, in my opinion. The size and lower energy consumption definitely open up new applications, but the problem is that it needs a clear field of view to do the measurements.
This could in turn restrict its applicability, e.g., as a wearable sensor.
In general I think it's great to see innovation in the PM sensor field, but miniaturization often comes at the cost of accuracy.
We saw that, for example, with Sensirion's photoacoustic SCD4x CO2 sensor, which is tiny but needs more black-box algorithms to compensate for certain environmental conditions, which in turn limits the range of applications.
My toddler recently went out on our roof to retrieve a football. I expected her to be a bit nervous, but she walked right up to the edge, with no apparent fear at all. I had to desperately shove down my instinct to yell so I didn't scare and distract her.
It's definitely not as frictionless as Excalidraw, though. Excalidraw, whilst not as powerful as draw.io, has the interface down.
Architecture diagrams, data flow diagrams, sequence diagrams, network diagrams, entity-relationship diagrams ...
I'd really like to find an option which can preferably be version controlled and doesn't require a hard-to-remember syntax (e.g., PlantUML).
At work it's always tough to find something which works, and which is free or already licensed (no chance to get new licenses), and which is easy enough for teammates of varying technical abilities to contribute to.
For architecture diagrams, most people seem to jump to draw.io, which is nice, but I'm not sure how easily it can be version controlled (although I haven't tried). At work it usually falls into the "did you put your latest version on SharePoint" black hole (we don't pay for the cloud-syncing version of draw.io). I wanted to try Figma, since it's at least a bit more collaborative, but there aren't any good first-party templates, so maybe it's not the right place either.
For DFDs, I'd like to try Mermaid, D2, or PlantUML (I'm scared off by the syntax on that one, though). I haven't tried any of these; right now we usually do these in draw.io too, but I feel like code-defined diagrams would be easier to maintain and could live in a repo more easily.
Sequence diagrams are currently usually done with the sequencediagram.org engine, which I'm not a huge fan of, but at least it's text that's relatively easy to handle. I don't think there was a good VS Code integration last time I checked (I think it was some web emulator, not a built-in engine?).
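For comparison, Mermaid's sequence-diagram syntax is about as lightweight as it gets, and it renders natively in GitHub READMEs, which helps with the version-control goal. A hypothetical login flow (participant and endpoint names are made up for illustration):

```mermaid
sequenceDiagram
    participant C as Client
    participant A as AuthService
    C->>A: POST /login (credentials)
    A-->>C: 200 OK + session token
    C->>A: GET /profile (token)
    A-->>C: profile JSON
```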
For ERDs, I'd also like to find a good local tool, ideally one that just uses SQL on the backend, so that it's one less conversion. I'm open to all suggestions for that, though.
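Worth noting that Mermaid also has an `erDiagram` mode whose attribute syntax maps fairly directly onto SQL DDL, so keeping it next to the schema in a repo is plausible. A toy sketch (table and column names invented):

```mermaid
erDiagram
    CUSTOMER ||--o{ ORDER : places
    ORDER ||--|{ ORDER_LINE : contains
    CUSTOMER {
        int id PK
        string name
    }
    ORDER {
        int id PK
        int customer_id FK
        date created_at
    }
```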
I like to sear my steaks in a cast iron skillet. I use an induction cooktop and tend to start at medium-high and ramp up to as hot as the stove will get. I think the ramp-up is important to render some of the fat without just letting it all evaporate.
I turn the steaks over frequently (30s intervals), which keeps the inside from cooking too much while the outside gets nice and crispy. I take them off the heat probably 2-3 minutes (but keep flipping! The pan is still really damn hot) before they go into the oven (at 400F).
I take the steaks out when they hit 120°F and pull them out of the pan ASAP.
During the “rest” that follows I add pepper and butter to the tops of the steaks. The outsides of these steaks become way, way hotter than the insides. But the size of the layer that is so hot is very thin due to the frequent turning. So they don’t need to rest long, and the temp doesn’t rise too much once they’re out of the oven.
If I want to edit a Google Doc, my edit must be expressed in terms of the Google Docs interface. If there's a Vim macro that could make that change in moments, it doesn't matter, because I'm not editing text: I'm consuming an API through a web UI. This is why, when I write a Google Doc, I usually draft it in Vim, compile to markdown, and then paste the formatted text into the doc. My clipboard offers me an interoperable workflow that the Google Docs UI does not.
This is why tools like Kubernetes use declarative yaml instead of interactive buttons and knobs. If your medium for communicating information is yaml, you can generate that yaml however you want, and update everything instantly. If your medium for communication is restricted to a set of buttons and dials, you're severely limited in terms of how you express information.
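As a sketch of what the declarative medium buys you: a Kubernetes Deployment is just data, so scaling up is a one-line diff that any tool (or person) can produce, rather than a sequence of clicks. Names and the image tag below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # change this number, re-apply, done
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.2.3   # a templating tool can rewrite this line
```

Applying it is the same `kubectl apply -f deployment.yaml` whether a human or a CI job wrote the file.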
> OK, you're on Office 365 and I'm on Google - so we'll have to work a little harder to set up access.
You're going to have to use files, is what. You're going to download your Google Doc as a .docx file and upload it into 365. And then, unless you want to convert to .docx again, you're going to have to switch to 365 entirely. Google and Microsoft cannot interoperate imperatively with each other's APIs, but they can both parse the same declarative document files.
"When should we pass data vs exposing an interactive interface" is an essential question in software architecture, and someone's making the same mistake here that you see in so much over-abstracted enterprise Java. Sometimes an interface just makes things worse.
I was almost happy when I saw Google Docs has Gemini built in, until I realized it was just another lame model (my go-to has been Gemini Pro 2.5 but I think Docs uses Flash 2.0 or another low-cost model with no option to upgrade).