The key distinguishing characteristic is that every Git checkout contains the full repo history and metadata. This means a consistent network connection to the master server isn't necessary; in fact, the concept of a "master server" itself isn't necessary. With Git, you only need to connect to other servers when you pull down changes or push them back up to a remote repository. You can happily commit, branch, revert, check out older revisions, etc. on your local checkout alone, without caring what's going on with the remote server. Even if you treat your remote repo on GitHub as your "master", it's still a far cry from the way centralized VCS works.
If you've never worked with a truly centralized VCS, it's easy to take this for granted. Working offline with a system like Perforce or SVN is technically possible but considerably more involved, and most people avoid it because it puts you far off the beaten path of how those systems are typically used. It basically involves running a local server for a while and then later painfully merging/reconciling your changes with the master. It's far more tedious than doing the equivalent work in Git.
Now, it's important to note that Git's model of "every checkout contains all the repo data" doesn't scale well once the repo contents become too large. That's why things like sparse checkouts, git-lfs, and VFS for Git exist. These extensions turn Git into something of a hybrid VCS, somewhere between a truly centralized and a truly decentralized system.
If you want to understand more, here's a great tech talk by Linus himself from 2007. It's notable because DVCS was very new on the scene at the time, and basically everyone was still using centralized VCS like SVN, CVS, Perforce, ClearCase, etc.
I think it would be possible to build a DVCS without the full repo history and metadata in every checkout. I doubt it would be worth the effort, though.
This is less of a mind-was-changed case and more just controversial, but... Checked Exceptions were a fundamentally good idea. They just needed some syntactic sugar to help redirect certain developers into less self-destructive ways of procrastinating on proper error handling.
In brief, for non-Java folks: Checked Exceptions are a subset of all Exceptions. To throw one, a method must declare it in its type signature, and any caller of that method must make some kind of decision about what to do when that Checked Exception arrives. [0] It's basically another return type for the method, married to the conventions and flow-control features of Exceptions.
[0] Ex: Let it bubble up unimpeded, adding it to your own method signature; catch it and wrap it in your own exception with a type more appropriate to your layer of abstraction; catch it and log it; catch it and ignore it... Alas, many caught it and wrapped it in a generic RuntimeException.
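A minimal Java sketch of what those choices look like in practice (the names readConfig, loadApp, and loadAppOrDie are made up purely for illustration):

```java
import java.io.IOException;
import java.io.UncheckedIOException;

class CheckedExceptionDemo {

    // The checked exception is part of the method's signature: callers
    // cannot invoke readConfig() without deciding what to do about it.
    static String readConfig(String path) throws IOException {
        if (path.isEmpty()) {
            throw new IOException("no config at: " + path);
        }
        return "contents of " + path;
    }

    // Option 1: let it bubble up by adding it to your own signature.
    static String loadApp(String path) throws IOException {
        return readConfig(path);
    }

    // Option 2: catch it and wrap it in a type suited to this layer.
    // (The lazy anti-pattern is wrapping in a bare RuntimeException instead.)
    static String loadAppOrDie(String path) {
        try {
            return readConfig(path);
        } catch (IOException e) {
            throw new UncheckedIOException("config load failed", e);
        }
    }
}
```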
I work on an Inversion of Control system integration framework sitting on top of a herd of business-logic handlers passing messages between systems. If I were to do it all over again, I'd have the business logic:
* return success or failure (e.g. invalid input)
* throw an exception when the operation might succeed in the near future (e.g. a timeout), with advice on how long to wait before retrying and how many retries to attempt before escalating
* throw an exception when a person needs to check things out (e.g. an authentication failure)
Any unchecked exception the business logic doesn't catch is treated as an outright failure. Deciding which failure belongs in which category is hard, but the business owners usually have strong opinions, which takes me off the hook.
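A rough sketch of what that three-outcome contract could look like in Java. All the names here (RetryableException, NeedsHumanException, Outcome, MessageHandler) are hypothetical, not from any real framework:

```java
import java.time.Duration;

// Hypothetical checked exception for transient failures (e.g. timeouts):
// carries advice on how long to wait and when to give up and escalate.
class RetryableException extends Exception {
    final Duration retryAfter;
    final int maxRetriesBeforeEscalating;

    RetryableException(String message, Duration retryAfter, int maxRetriesBeforeEscalating) {
        super(message);
        this.retryAfter = retryAfter;
        this.maxRetriesBeforeEscalating = maxRetriesBeforeEscalating;
    }
}

// Hypothetical checked exception for failures that need a human
// (e.g. an authentication failure that no amount of retrying will fix).
class NeedsHumanException extends Exception {
    NeedsHumanException(String message) {
        super(message);
    }
}

// Plain return value for the ordinary outcomes, including invalid input.
enum Outcome { SUCCESS, INVALID_INPUT }

interface MessageHandler {
    // The signature spells out all three cases: a returned Outcome for
    // normal results, plus two checked exceptions for the situations the
    // framework must handle differently (retry vs. escalate to a person).
    Outcome handle(String message) throws RetryableException, NeedsHumanException;
}
```

The point of making both exception types checked is that the framework code driving these handlers can't forget either case; the compiler forces a decision at every call site.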