This is a part that probably varies a lot individually... I'm a slow thinker, and I have a hard time reacting to fully formed ideas thrown at me across a table. The other side coming fully prepared for the discussion, while on the receiving end you have to do your homework in real time, doesn't help either.
If it's random brainstorming it doesn't matter much. If it's an actually important topic, it feels like a DDoS on your brain when you want to take the time to understand the implications. Sure, some situations require speed, but I don't see that many of them in day-to-day life. "What do you think about moving to X framework for our metrics?" can probably wait 30 minutes and doesn't need to be a real-time exchange.
> it's very uncomfortable to be on a zoom call for more than about 45 minutes at a time.
I find these meetings way more comfortable remote... someone can ramble for as long as they need, and the hit to everyone's productivity is that much smaller. Same for people who speak very slowly, actually. It's a lot less stressful IMO, and paradoxically it's easier to engage when you don't need to be looking them in the eyes 100% of the time.
That's the feature, not a bug. It gives the meeting initiator a better chance of steamrolling their way to approval.
Notice the difference between meetings where people share agendas and docs ahead of time vs those where you show up and they walk you through the doc.
Sadly the laptop's too small and Chrome's too power-hungry for solar power to be an option: ⅓ m width × ¼ m length × 158 W/m² solar ≈ 13 W out of the 67 W needed to run it, or about 19%.
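For the curious, a quick sanity check of that arithmetic in Python; the panel area, irradiance, and 67 W draw are taken straight from the figures above, not measured:

```python
# Back-of-the-envelope check of the solar numbers above.
width_m = 1 / 3          # laptop width, ~1/3 m (assumed in the comment)
length_m = 1 / 4         # laptop depth, ~1/4 m (assumed in the comment)
solar_w_per_m2 = 158     # usable solar flux assumed in the comment
laptop_draw_w = 67       # laptop power draw assumed in the comment

panel_w = width_m * length_m * solar_w_per_m2  # ~13.2 W harvested
fraction = panel_w / laptop_draw_w             # ~0.197

print(f"{panel_w:.1f} W harvested, {fraction:.1%} of the {laptop_draw_w} W draw")
```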
For example, make a login management framework that is feature-complete and does not require the dev to implement their own "hooks" into its methods. Instead, use a config file to tell the framework how to behave (expose this HTTP endpoint, use this data backend, etc.), send it data in the shape it expects, and have it respond with booleans. I assume devs might hate this, but it gives the business what it needs without relying on devs to implement it correctly (or having to wait on them to do it).
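A minimal sketch of what such a config could look like; every key, value, and endpoint here is hypothetical, not any real framework's format:

```yaml
# Hypothetical config for an imagined feature-complete login framework.
endpoint: /auth/login        # expose this HTTP endpoint
backend:
  type: postgres             # use this data backend
  dsn: postgres://auth@db:5432/users
passwords:
  hash: argon2id             # framework owns the crypto; devs can't weaken it
session:
  ttl: 30m
  cookie: __Host-session
# The app POSTs credentials in the documented shape and gets back a
# boolean allow/deny. No hooks, no custom crypto, nothing to get wrong.
```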
There are some overcomplicated examples of this already (Keycloak), but we could make simpler ones too that are secure, and more of them. In particular, I think SQL frameworks, REST API frameworks, HTTP daemons, container image builders, cloud authentication methods, Git repositories, etc. could easily implement stronger guardrails to make development secure by default.
The fact that most cloud software today still tells devs to give it a static, never-expiring authentication key is absurd. That just should not be possible; take that shit out of the software. We can do way better.
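As one hedged example of the pattern that could be the default instead: short-lived credentials minted on demand, here via AWS STS (the role ARN and session name below are made up for illustration):

```python
# Short-lived credentials instead of a static, never-expiring key.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/app-deployer",  # hypothetical role
    RoleSessionName="ci-run",
    DurationSeconds=900,  # credentials self-destruct after 15 minutes
)["Credentials"]

# These keys stop working at creds["Expiration"], so a leak has a
# 15-minute blast radius instead of an infinite one.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```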
Who's we? Who are they integrating with? A protocol? A business? A government?
This has been tried in a multitude of ways. There's always a bit too much friction or cost.
It's rather surprising to me that at least some of the US states aren't more data protection–oriented on this stuff.
So I don't know where this conspiracy theory comes from, but it's entirely unfounded, unless you're talking about a designer like Jony Ive, with the power to push through multiple obstacles with ease. In that case the change isn't happening to prove he's needed, but because he has a different vision.
Definitely not the case everywhere.
These are policies, which are purely imaginary. Only when they get implemented into human law do they gain a grain of substance, but they remain imaginary. Failure to comply can be kinetic, but that is a contingency, not the object (matter :D).
Personally, I see good reasons to have regulations on privacy, intellectual property, filming people in my house's bathroom, NDAs, etc. These subjects are central to the way society works today. Western society, at least, would be severely affected if these subjects suddenly became a free-for-all.
I am not convinced we need such regulation for AI at this point of technological readiness, but if the social implications create unacceptable imbalances, we can start by regulating in detail. If detailed caveats still do not work, then broader law can come. Which leads to my own theory:
All this turbulence about regulation reflects a mismatch between technological, political, and legal knowledge. Tech people don't know the law, nor how it flows from policy. Politicians don't know the tech and have not seen its impact on society. Naturally there is a pressure gradient between the two sides that generates turbulence. The gradient is steep because the stakes are high: for tech people, the killing of a promising new field; for politicians, a large majority of their constituency being rendered useless.
Final point: if one sees AI as a means of production that can be monopolised by a capital-rich few, we may see a remake of 19th-century inequality. That inequality created one of the most powerful ideologies known: Communism.
We're more likely to see a theocratic movement centered on the struggle of human souls vs the soulless simulacra of AI.