Maybe I'm overthinking this, but shouldn't there be some transparency about what you're building on top of? Especially with open source projects that have attribution requirements? I get that it's still early days, but this feels like a pretty basic thing to get right.
Anyone else notice this or know if this is standard practice? Just seems odd to me that they're not being upfront about the foundation they're building on.
Whole process: https://www.pgschema.com/blog/demo-with-manus-and-asciinema
Replay: https://manus.im/share/8fEln1OzxpnsRSU1PnHweG?replay=1
Speaking of which, an interesting thing to contemplate is whether it is worth automating what you did, or whether making the videos happens rarely enough that you'd just start from scratch with a new Manus or other AI session each time.
How do I give it a base URL for API calls so I can point it at my ollama server?
llm install llm-ollama
and then you use whatever model you like, i.e. any model llm has installed. See https://llm.datasette.io/en/stable/plugins/installing-plugin... for plugin install info. (For pointing it at a remote Ollama server, see the note after the session below.) Here is a sample session. You can't tell from the transcript, but it is very slow on my CPU-only, non-Apple machine (each response took about 30 seconds) :)
>>> from gremllm import Gremllm
>>> counter = Gremllm("counter", model="gemma3n:e2b")
>>> counter.value = 5
>>> counter.increment()
>>> counter.value
None
>>> counter.value
None
>>> counter.value
None
>>> counter.what_is_your_total()
6
6
... also I don't know why it kept saying my value is None :) . The "6" is doubled because one of them must have been a print from inside the method and the other is the return value echoed by the REPL.
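For what it's worth, that doubling is just normal interactive-interpreter behavior whenever a function both prints a value and returns it; a quick illustration in plain Python, nothing gremllm-specific:

>>> def total():
...     print(6)   # printed by the function itself
...     return 6   # echoed again by the REPL
...
>>> total()
6
6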
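And on the base URL question upthread: if I remember right, llm-ollama talks to Ollama via the OLLAMA_HOST environment variable, so pointing it at a remote server should just be a matter of setting that before anything touches the Ollama client. A rough sketch, with a made-up host/port standing in for your own server:

import os

# Assumption: llm-ollama picks up OLLAMA_HOST from the environment;
# replace the placeholder address with your own Ollama server.
os.environ["OLLAMA_HOST"] = "http://192.168.1.50:11434"

from gremllm import Gremllm

counter = Gremllm("counter", model="gemma3n:e2b")  # any model that server has pulled
counter.value = 5
counter.increment()
print(counter.value)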