AI (especially LLMs) will likely stay top-of-mind in 2026, but I expect costs to drop meaningfully and capabilities to feel more “infrastructure-like” rather than magical. SME adoption will be what drives AI to the masses.
If AI doesn’t meet near-term revenue and productivity promises, we may see pressure or stagnation in tech valuations, even as the underlying technology continues to improve. In other words, the market may cool before the tech does.
On the macro side, I wouldn’t be surprised if we see more market stability or mild declines, driven by a re-rating of expectations rather than a systemic collapse. Capital might rotate from speculative growth into cash-flow-positive businesses that actually deploy AI profitably.
More broadly, I think 2026 will reward: reliability over flashy innovation in AI, engineering depth over marketing narratives, and systems thinking over isolated “features.”
Less “what’s possible?” and more “what actually works at scale?”
I’m trying to surface and study those scattered examples—especially the ones that explain why decisions were made, not just what was built.
But being specific matters more here, imo: engineering means “solving problems at scale,” regardless of the industry.
The fact that lawyers care about chain of custody, auditability, and immutability makes this less of an “AI app” and more of a compliance workflow tool, which might matter a lot for positioning.
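To make that “compliance workflow” angle concrete, here’s a rough sketch of what an immutable, auditable record could look like under the hood. This is purely illustrative on my part (not how the actual product works): a hash-chained, append-only log where editing any past entry breaks verification.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only log; each entry commits to the previous entry's hash,
    so any after-the-fact modification is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"ts": time.time(), "event": event, "prev_hash": prev_hash}
        # Hash the record (without its own hash field) deterministically.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Walk the chain; return False if any entry was altered or reordered."""
        prev_hash = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["prev_hash"] != prev_hash or record["hash"] != expected:
                return False
            prev_hash = record["hash"]
        return True


# Example: every document action gets logged; verify() is what an auditor runs.
log = AuditLog()
log.append({"action": "uploaded", "doc": "bank_statement_2024.pdf"})
log.append({"action": "analyzed", "doc": "bank_statement_2024.pdf"})
assert log.verify()
```

The point isn’t the specific mechanism, it’s that “we can prove nothing was altered” is a feature lawyers will pay for and generic AI apps rarely offer.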
On B2C vs B2B: individuals feel this pain once, lawyers feel it on every case — which usually determines who actually pays.
The biggest risk seems less about accuracy and more about how courts classify the output (calculator vs expert opinion). That likely drives both liability and pricing.
Have you run this past a practicing family lawyer or forensic accountant yet, even informally?