Importantly, there is no "wilful" requirement, and it applies to all directors, not just those who actively participated in misconduct. If you weren't involved, you have to prove that you actively tried to stop it, or that you weren't managing the company for a specific reason, such as illness. You were the director who mostly turned up to board meetings to help meet quorum, you trusted that the other directors had things under control, and you were completely unaware of the debts? Too bad, liable. You hired external advisers, delegated to them, and they didn't do it? Too bad! You decided to wash your hands of the whole thing and resigned from the board, but didn't actively try to rectify the situation first? Yep, they're still coming for you.
(I believe the criminal convictions with prison time only really kick in for those who actively participated in tax fraud or who refuse to pay their director penalties.)
- I hope they succeed and eventually deliver a solid version of this product. Verifiable photography is going to become important, and it's good to see startups working on this.
- While I'm sure some artists will like the idea of verifiable photography, the applications that matter to me are any kind of photography with the potential to end up in a news article or in court.
- Selling what is essentially a prototype is fine; it's extremely obvious that's what it is, and they explicitly say so! Who cares if it's not very good as a camera?
- The almost complete lack of information on their site about their security model or how their ZKPs work is not particularly encouraging.
- It follows that I don't have much faith that either the cryptography or the hardware anti-tamper measures in this beta device would stand up to even some decent amateurs given a couple of weeks to have a crack at it. I'm almost tempted to buy one just to see how far I, a random kernel engineer who gets modestly decent scores at my local hacker con CTF, could get. But I may well be completely underestimating them! Hard to tell with the fairly scarce information.
- Why did they pick a name that's similar to both a) AMD's GPU stack and b) the law enforcement/natsec computer vision business, ROC (https://roc.ai)?
It is a power used very sparingly, even though legally it is unlimited. The state of New South Wales is, as far as I know, the only one that publishes details about uses of the pardon power; in an average year there are zero successful pardon/commutation applicants, and it's an exceptionally merciful year if they grant two or more. Other states and the federal government may or may not be a bit more generous, but we're talking very small numbers. Most pardons address unsafe convictions where, for whatever reason, no avenue of appeal remains (rare these days, because each state has introduced laws enabling post-conviction reviews).
Historically, particularly in the 19th century convict era, the pardon power was much more important, and was indeed abused for political reasons on a number of occasions, but it seems that for the most part it quietly exists in the background and only gets significant public attention once in a blue moon, for a high-profile murder case or similar.
What explains the difference? Is it the requirement for sign-off by the King's viceroys that prevents abuse? Collective Cabinet governance that is accountable to Parliament? Maybe our political culture means politicians' friends end up in prison less often, so there's less opportunity for the abuse of pardons specifically? It's not particularly clear to me - if anyone's got some good comparative studies, send me links!
Correct me if I'm wrong, but it sounds like this definition covers basically all automation of any kind. Like, a dumb lawnmower responds to the input of the throttle lever and the kill switch and generates an output of a spinning blade which influences the physical environment, my lawn.
> “Catastrophic risk” means a foreseeable and material risk that a large developer’s development, storage, use, or deployment of a foundation model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident, scheme, or course of conduct involving a dangerous capability.
I had a friend who cut his toe off with a lawnmower. I'm pretty sure more than 50 people a year injure themselves with lawnmowers.
In any case, that definition is only used to further define "foundation model": "an artificial intelligence model that is all of the following: (1) Trained on a broad data set. (2) Designed for generality of output. (3) Adaptable to a wide range of distinctive tasks." This legislation is very clearly not supposed to cover your average ML classifier.
It’s that the paper-using doctor can spend more time on you, the patient, instead of fighting with a balky UI and inane business rules.
Meanwhile, I had a similar prescription, from a different specialist, who issues his prescriptions as either e-scripts or computer-generated paper scripts depending on patient preference. I suspect his practice management software would stop him from making this class of mistake entirely.
I get why a doctor might prefer to avoid the computer, but I think my relative would have preferred a doctor who doesn't screw up something basic and waste a significant amount of their time over better vibes in a consult.
At my house, it's a 140 mile round trip between the fulfillment center ("are you feeling fulfilled yet?") and the drop off location.
OTOH, there's likely more of "you" than there are of "me" ...
While Amazon is efficient, "fractions of a cent" is probably the wrong order of magnitude for even the most efficient order.