In my opinion, the belief that strict adherence to processes is beneficial is flawed, applicable mainly to highly repetitive and mundane tasks. For example, right now I am 100% focused on expediting the progress of the projects I start, including AssistOS (blog.assistos.net), initiated this year. Anticipating a period of considerable chaos ahead, my sole process is a firm belief in the utility of the project. Additionally, I somewhat subscribe to a Kabbalistic approach to life, where the intention of contributing positively to the world invariably brings reciprocal rewards. I am committed to adapting to every emerging opportunity to advance the execution of my projects as effectively as possible. Act quickly on what you believe will benefit both you and others. Don't overthink or aim for perfection. You'll make many mistakes, but that's okay; it's the best way to learn quickly.
After an editor who has edited millions of pages and seems to be a jack-of-all-trades unjustifiably rejects your contribution on a topic where you have dozens of published scientific articles, the only conclusion is that the system is flawed. A fundamental change in approach is needed, probably toward a system where censorship exists only for clearly illegal content and various opinions are allowed to be expressed. On the other hand, to filter out the noise, there's a need for a trust propagation system among editors and viewers, so that you always see the most probable form of a page based on the trust you place in direct contacts and, indirectly, in their contacts, recursively. Maybe AI could also help a bit. Who dares to start a new Wikipedia ;) ?
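The trust propagation idea above could be sketched in a few lines. This is only an illustrative toy, not a proposed implementation: the graph, the per-hop decay through multiplied trust weights, and the "strongest path wins" rule are all my assumptions about how direct and recursive trust might combine to rank competing versions of a page.

```python
def propagate_trust(direct_trust, viewer, max_hops=3):
    """Spread the viewer's trust through the graph of direct contacts.

    direct_trust maps each user to {contact: trust in [0, 1]}.
    Trust in an indirect contact is the product of trusts along the
    path; we keep the strongest path found within max_hops.
    (Hypothetical model, chosen for illustration.)
    """
    scores = {viewer: 1.0}          # the viewer fully trusts themselves
    frontier = {viewer: 1.0}
    for _ in range(max_hops):
        next_frontier = {}
        for user, score in frontier.items():
            for contact, t in direct_trust.get(user, {}).items():
                s = score * t       # trust decays with each hop
                if s > scores.get(contact, 0.0):
                    scores[contact] = s
                    next_frontier[contact] = s
        frontier = next_frontier
    return scores

def most_trusted_version(versions, viewer, direct_trust):
    """Pick the page version whose supporting editors carry the most
    propagated trust from this viewer's perspective.

    versions maps a version id to the list of editors endorsing it.
    """
    scores = propagate_trust(direct_trust, viewer)
    return max(versions,
               key=lambda v: sum(scores.get(e, 0.0) for e in versions[v]))

# Toy data: alice trusts bob strongly and carol weakly; bob trusts dave.
graph = {"alice": {"bob": 0.9, "carol": 0.2},
         "bob": {"dave": 0.8}}
versions = {"v1": ["carol"], "v2": ["dave"]}
print(most_trusted_version(versions, "alice", graph))  # → v2
```

Here "v2" wins because dave's trust, reached through bob (0.9 × 0.8 = 0.72), outweighs carol's direct 0.2. A real system would also need to handle Sybil attacks, distrust edges, and tie-breaking, none of which this sketch attempts.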
Exciting concept! Empowering employees to make bold decisions with accountability can boost organizational effectiveness. Crafting a credit system for personal and institutional risk assurance is a thrilling research challenge. But "important" people are probably busy managing their current brand and social influence...
We are entering an era where AI can potentially enhance moderation. How can we ensure it aids rather than hinders? Imagine AI not as a lazy censor, but as a tool capable of discerning the value of diverse contributions. Unlike a Wikipedia editor who might be overwhelmed by thousands of articles across numerous domains, AI could objectively evaluate scientific results in peer-reviewed journals. It could also connect current discussions with past contributions and provide gentle, rule-based corrections. Could this be the future of fair and efficient online moderation?
A billion- or even trillion-dollar value proposition awaits entrepreneurs who seize the opportunity to implement AI moderators in platforms like Wikipedia, search engines, and social networks. This disruptive potential beckons ambitious innovators to enter the market, positioning themselves to challenge and reshape the industry while creating substantial economic value.
It will kill startups working on AI agents; it is already clear from the latest updates that they compete with their API customers, and that is bad for creative startups. The world requires better open-source LLM models. Also, startups should learn to collaborate and not reinvent everything, but how to do that is an unsolved challenge in the current capitalist economic model...
They do not invest much in complex UI; everything they ship is based on the chat metaphor and the intelligence of their LLMs. Clever. But sooner or later AI should move to the level of operating systems, or at least replace all kinds of applications and integrate with complex workflows. The difficulty of deciding what is opinion and what is objective fact will slow everything down for them, as for everybody else.