Also, I wanted to "test" my work before deploying it, but that doesn't seem to be possible?
Yes - at present, a note needs to be deployed before you can test it. Then you can make a change and redeploy. We can look at making this a bit more seamless in future iterations.
Re: the AI button - any suggestions on how we might improve there?
(I also don’t play videos or animations, and try to ignore them if they autoplay. They are usually too fast or too slow. A slideshow that you step through is okay.)
Maybe I’m not your target audience, though?
Point well taken, and we'll definitely add more details/docs to the site.
Thanks!
Otherwise, it seems like “guess the verb” in a text adventure.
[Oops, edited before I saw the reply.]
We're doing a few things to try to guide the user:
1. We include ~26 fully-formed "Example" and "Template" notes in the app - these are intended to show users what they can do, but they can also be copied and used as-is or edited.
2. When users write new notes, we use a custom keyboard tool called "AutoPrompt" to show the user what actions they can take in each section. It offers prompts like "Search reddit for...", which are inserted into the text entry for the user to complete. (You can see this in the video on our home page.)
3. If users write something incorrect or invalid in a note, they get a response back telling them what they need to correct.
Admittedly, with a notes-based interface there's a tradeoff we're trying to balance between the flexibility of unstructured text input and the guardrails that guide the user. Appreciate any further feedback you can share!