1. Backups and account recovery: we're working with humans here. They will lose their keys in great numbers, sometimes into the hands of malicious actors. How do users then recover their credentials quickly and reliably?
2. Fragmentation: let's be optimistic and say digital credentials for driver's licenses are issued by _only_ 50 entities (one per state). Assuming we don't get a single federal format for them (read: a politically infeasible national ID), how does Facebook, let alone some rando startup, handle parsing and authenticating all these different credential formats? Oh, and they can change at any time, thanks to some political squabble in the given state.
OP, you clearly know all this, so I’m just reminding you as someone down in the identity trenches.
2. The data format issue is (or was) indeed a concern, though it was never insurmountable. A data dictionary would have been the most straightforward way to address it: https://cipheredtrust.com/doc/#data-processing
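To make that concrete, here's a minimal sketch of the data dictionary idea. The issuer keys and field names are invented for illustration, not taken from the cipheredtrust spec:

```typescript
// Hypothetical data dictionary: each issuer's field names map onto one
// canonical credential shape, so a verifier only ever consumes the
// canonical form. All names here are made up for illustration.
type CanonicalCredential = {
  fullName: string;
  dateOfBirth: string; // ISO 8601
  issuer: string;
};

// Per-issuer mapping: source field name -> canonical field name.
const DATA_DICTIONARY: Record<string, Record<string, keyof CanonicalCredential>> = {
  "us-ca-dmv": { name: "fullName", dob: "dateOfBirth" },
  "us-tx-dps": { holder_name: "fullName", birth_date: "dateOfBirth" },
};

function normalize(issuer: string, raw: Record<string, string>): CanonicalCredential {
  const mapping = DATA_DICTIONARY[issuer];
  if (!mapping) throw new Error(`No dictionary entry for issuer: ${issuer}`);
  const out: Partial<CanonicalCredential> = { issuer };
  for (const [src, canon] of Object.entries(mapping)) {
    if (raw[src] !== undefined) out[canon] = raw[src];
  }
  return out as CanonicalCredential;
}
```

Fifty issuers means fifty dictionary entries, maintained as formats change; that's the cost of the approach, but it's a lookup table, not code.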
I say data format discernment _was_ a concern because, as fate would have it, we now have the perfect tech to address it: LLMs. You can shove just about any data format into an LLM and it will spit out a transformation into whatever shape you are looking for, without the need to know the source format. Browsers are integrating LLM features as APIs, so this kind of use is feasible for both front-end and back-end tasks.
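A rough sketch of how that could look in the browser, assuming the shape of Chrome's experimental Prompt API (a `LanguageModel` global with create()/prompt()); the API is still in flux, so treat these names as illustrative:

```typescript
// Sketch only: normalizing a credential of unknown format via an
// in-browser LLM. The Prompt API shape assumed here is experimental
// and subject to change.
async function normalizeCredential(rawCredential: string): Promise<unknown> {
  const session = await (globalThis as any).LanguageModel.create();
  const answer: string = await session.prompt(
    "Transform the following credential into JSON with the fields " +
      '{"fullName": string, "dateOfBirth": string, "issuer": string}. ' +
      "Reply with JSON only.\n\n" +
      rawCredential
  );
  return JSON.parse(answer);
}
```

Note that this only handles the parsing half of the problem; the credential's signature still has to be verified against the issuer's keys, LLM or no LLM.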
Every time I hear about some dumb approach to age verification (conversation analysis... really?) or a romance scam story traced to a fraudster somewhere in Malaysia, I want to scream: THERE IS A CORRECT SOLUTION.
https://news.ycombinator.com/item?id=44723418
It is also highly compatible with the internet, both in terms of technical/performance scalability and of utility (you can use it for just about any information verification need in any kind of application).
Context pollution is a serious problem - I love that you use that term as well.
Have you had good feedback for your fork-off implementation?
Yes, it has proven quite a useful feature, primarily for the reason stated above: it lets users get a full log of what's going on in the same session where the core task is taking place.
We also use it extensively to facilitate back-and-forth conversation with the agents; for instance, a lot of our human-in-the-loop capabilities rely on the forking functionality. The scope of its utility has been frankly surprising :)
In Solvent, the main utility is allowing forked-off use of the same session without context pollution.
For instance, a coding assistant session can fork off checklist generation and then proceed to the core task of writing code. This lets the human user see the related flows (checklist gen, requirements gen, coding, etc.) in chronological order, with none of them polluting the others' context.
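To illustrate the shape of the idea (a sketch only, not Solvent's actual implementation): a fork starts from a copy of the parent session's history, so whatever the fork generates never lands back in the parent's context:

```typescript
// Sketch of fork-off sessions (illustrative, not Solvent's code).
// A fork snapshots the parent's history; anything the fork appends
// stays in the fork, so side tasks like checklist generation never
// pollute the parent's context.
type Message = { role: "user" | "assistant"; content: string };

class Session {
  private messages: Message[] = [];

  append(msg: Message): void {
    this.messages.push(msg);
  }

  // Copy the history by value; later appends diverge.
  fork(): Session {
    const child = new Session();
    child.messages = this.messages.map((m) => ({ ...m }));
    return child;
  }

  history(): readonly Message[] {
    return this.messages;
  }
}

// Usage: generate a checklist in a fork, then do the core coding task
// in the parent with its context untouched by the checklist exchange.
const main = new Session();
main.append({ role: "user", content: "Implement the payment webhook." });

const checklistFork = main.fork();
checklistFork.append({ role: "user", content: "First, draft a checklist." });

console.log(main.history().length);          // 1 -> parent unchanged
console.log(checklistFork.history().length); // 2
```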
This appears everywhere: every tool tries to autocomplete every sentence and action, creating a very clunky ecosystem where I am constantly pressing Escape and Backspace to undo some action that is rewriting what I am doing into something I don't want or didn't intend.
It wastes time, and none of the things I actually want are optimized; their tools feel like they are built to help people write "good morning team, today we are going to do a Business, but first we must discuss the dinner reservations" emails.
This is the nightmare scenario with AI, i.e. people settling for Microsoft/OpenAI et al. doing the "thinking" for them.
It is alluring, but of course it is not going to work. It is similar to what social media did to the internet: "kick back and relax, we'll give you what you really want, you don't have to take any initiative."
My pitch against this is to vehemently resist chatbot-style solutions/interfaces and demand intelligent workspaces:
https://codesolvent.com/botworx/intelligent-workspace/